0s autopkgtest [10:25:31]: starting date and time: 2024-06-16 10:25:31+0000
0s autopkgtest [10:25:31]: git checkout: 433ed4cb Merge branch 'skia/nova_flock' into 'ubuntu/5.34+prod'
0s autopkgtest [10:25:31]: host juju-7f2275-prod-proposed-migration-environment-3; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.pyahu3a4/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:traitlets --apt-upgrade jupyter-notebook --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=traitlets/5.14.3-1 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-3@bos01-ppc64el-16.secgroup --name adt-oracular-ppc64el-jupyter-notebook-20240616-102531-juju-7f2275-prod-proposed-migration-environment-3-f7666d8f-c4c0-4137-95eb-491025808bae --image adt/ubuntu-oracular-ppc64el-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-3 --net-id=net_prod-proposed-migration -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com'"'"'' --mirror=http://us.ports.ubuntu.com/ubuntu-ports/
136s autopkgtest [10:27:47]: testbed dpkg architecture: ppc64el
136s autopkgtest [10:27:47]: testbed apt version: 2.9.5
136s autopkgtest [10:27:47]: @@@@@@@@@@@@@@@@@@@@ test bed setup
137s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [110 kB]
137s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [36.1 kB]
138s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [389 kB]
138s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [2576 B]
138s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [7052 B]
138s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main ppc64el Packages [42.8 kB]
138s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/restricted ppc64el Packages [1860 B]
138s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/universe ppc64el Packages [312 kB]
138s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse ppc64el Packages [2532 B]
138s Fetched 905 kB in 1s (1014 kB/s)
138s Reading package lists...
140s Reading package lists...
140s Building dependency tree...
140s Reading state information...
141s Calculating upgrade...
141s The following packages will be upgraded:
141s   libldap-common libldap2
141s 2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
141s Need to get 262 kB of archives.
141s After this operation, 0 B of additional disk space will be used.
141s Get:1 http://ftpmaster.internal/ubuntu oracular/main ppc64el libldap-common all 2.6.7+dfsg-1~exp1ubuntu9 [31.5 kB]
141s Get:2 http://ftpmaster.internal/ubuntu oracular/main ppc64el libldap2 ppc64el 2.6.7+dfsg-1~exp1ubuntu9 [231 kB]
142s Fetched 262 kB in 1s (519 kB/s)
142s (Reading database ... 72676 files and directories currently installed.)
142s Preparing to unpack .../libldap-common_2.6.7+dfsg-1~exp1ubuntu9_all.deb ...
142s Unpacking libldap-common (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ...
142s Preparing to unpack .../libldap2_2.6.7+dfsg-1~exp1ubuntu9_ppc64el.deb ...
142s Unpacking libldap2:ppc64el (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ...
142s Setting up libldap-common (2.6.7+dfsg-1~exp1ubuntu9) ...
142s Setting up libldap2:ppc64el (2.6.7+dfsg-1~exp1ubuntu9) ...
142s Processing triggers for man-db (2.12.1-2) ...
142s Processing triggers for libc-bin (2.39-0ubuntu9) ...
142s Reading package lists...
143s Building dependency tree...
143s Reading state information...
143s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
143s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease
143s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease
144s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease
144s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease
145s Reading package lists...
145s Reading package lists...
145s Building dependency tree...
145s Reading state information...
145s Calculating upgrade...
145s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
145s Reading package lists...
145s Building dependency tree...
145s Reading state information...
146s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
148s autopkgtest [10:27:59]: testbed running kernel: Linux 6.8.0-31-generic #31-Ubuntu SMP Sat Apr 20 00:05:55 UTC 2024
148s autopkgtest [10:27:59]: @@@@@@@@@@@@@@@@@@@@ apt-source jupyter-notebook
153s Get:1 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (dsc) [3886 B]
153s Get:2 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (tar) [8501 kB]
153s Get:3 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (diff) [49.6 kB]
154s gpgv: Signature made Thu Feb 15 18:11:52 2024 UTC
154s gpgv: using RSA key D09F8A854F1055BCFC482C4B23566B906047AFC8
154s gpgv: Can't check signature: No public key
154s dpkg-source: warning: cannot verify inline signature for ./jupyter-notebook_6.4.12-2.2ubuntu1.dsc: no acceptable signature found
154s autopkgtest [10:28:05]: testing package jupyter-notebook version 6.4.12-2.2ubuntu1
154s autopkgtest [10:28:05]: build not needed
155s autopkgtest [10:28:06]: test pytest: preparing testbed
156s Reading package lists...
156s Building dependency tree...
156s Reading state information...
157s Starting pkgProblemResolver with broken count: 0
157s Starting 2 pkgProblemResolver with broken count: 0
157s Done
157s The following additional packages will be installed:
157s   fonts-font-awesome fonts-glyphicons-halflings fonts-lato fonts-mathjax gdb
157s   jupyter-core jupyter-notebook libbabeltrace1 libdebuginfod-common
157s   libdebuginfod1t64 libjs-backbone libjs-bootstrap libjs-bootstrap-tour
157s   libjs-codemirror libjs-es6-promise libjs-jed libjs-jquery
157s   libjs-jquery-typeahead libjs-jquery-ui libjs-marked libjs-mathjax
157s   libjs-moment libjs-requirejs libjs-requirejs-text libjs-sphinxdoc
157s   libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64 libpgm-5.3-0t64
157s   libpython3.12t64 libsodium23 libsource-highlight-common
157s   libsource-highlight4t64 libzmq5 node-jed python-notebook-doc
157s   python-tinycss2-common python3-argon2 python3-asttokens python3-bleach
157s   python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil
157s   python3-debugpy python3-decorator python3-defusedxml python3-entrypoints
157s   python3-executing python3-fastjsonschema python3-html5lib python3-iniconfig
157s   python3-ipykernel python3-ipython python3-ipython-genutils python3-jedi
157s   python3-jupyter-client python3-jupyter-core python3-jupyterlab-pygments
157s   python3-matplotlib-inline python3-mistune python3-nbclient python3-nbconvert
157s   python3-nbformat python3-nest-asyncio python3-notebook python3-packaging
157s   python3-pandocfilters python3-parso python3-pexpect python3-platformdirs
157s   python3-pluggy python3-prometheus-client python3-prompt-toolkit
157s   python3-psutil python3-ptyprocess python3-pure-eval python3-py
157s   python3-pydevd python3-pytest python3-requests-unixsocket python3-send2trash
157s   python3-soupsieve python3-stack-data python3-terminado python3-tinycss2
157s   python3-tornado python3-traitlets python3-typeshed python3-wcwidth
157s   python3-webencodings python3-zmq sphinx-rtd-theme-common
157s Suggested packages:
157s   gdb-doc gdbserver libjs-jquery-lazyload libjs-json libjs-jquery-ui-docs
157s   fonts-mathjax-extras fonts-stix libjs-mathjax-doc python-argon2-doc
157s   python-bleach-doc python-bytecode-doc python-coverage-doc
157s   python-fastjsonschema-doc python3-genshi python3-lxml python-ipython-doc
157s   python3-pip python-nbconvert-doc texlive-fonts-recommended
157s   texlive-plain-generic texlive-xetex python-pexpect-doc subversion pydevd
157s   python-terminado-doc python-tinycss2-doc python3-pycurl python-tornado-doc
157s   python3-twisted
157s Recommended packages:
157s   libc-dbg javascript-common python3-lxml python3-matplotlib pandoc
157s   python3-ipywidgets
157s The following NEW packages will be installed:
157s   autopkgtest-satdep fonts-font-awesome fonts-glyphicons-halflings fonts-lato
157s   fonts-mathjax gdb jupyter-core jupyter-notebook libbabeltrace1
157s   libdebuginfod-common libdebuginfod1t64 libjs-backbone libjs-bootstrap
157s   libjs-bootstrap-tour libjs-codemirror libjs-es6-promise libjs-jed
157s   libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked
157s   libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text
157s   libjs-sphinxdoc libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64
157s   libpgm-5.3-0t64 libpython3.12t64 libsodium23 libsource-highlight-common
157s   libsource-highlight4t64 libzmq5 node-jed python-notebook-doc
157s   python-tinycss2-common python3-argon2 python3-asttokens python3-bleach
157s   python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil
157s   python3-debugpy python3-decorator python3-defusedxml python3-entrypoints
157s   python3-executing python3-fastjsonschema python3-html5lib python3-iniconfig
157s   python3-ipykernel python3-ipython python3-ipython-genutils python3-jedi
157s   python3-jupyter-client python3-jupyter-core python3-jupyterlab-pygments
157s   python3-matplotlib-inline python3-mistune python3-nbclient python3-nbconvert
157s   python3-nbformat python3-nest-asyncio python3-notebook python3-packaging
157s   python3-pandocfilters python3-parso python3-pexpect python3-platformdirs
157s   python3-pluggy python3-prometheus-client python3-prompt-toolkit
157s   python3-psutil python3-ptyprocess python3-pure-eval python3-py
157s   python3-pydevd python3-pytest python3-requests-unixsocket python3-send2trash
157s   python3-soupsieve python3-stack-data python3-terminado python3-tinycss2
157s   python3-tornado python3-traitlets python3-typeshed python3-wcwidth
157s   python3-webencodings python3-zmq sphinx-rtd-theme-common
157s 0 upgraded, 96 newly installed, 0 to remove and 0 not upgraded.
157s Need to get 34.9 MB/34.9 MB of archives.
157s After this operation, 184 MB of additional disk space will be used.
157s Get:1 /tmp/autopkgtest.E327Mm/1-autopkgtest-satdep.deb autopkgtest-satdep ppc64el 0 [752 B]
157s Get:2 http://ftpmaster.internal/ubuntu oracular/main ppc64el fonts-lato all 2.015-1 [2781 kB]
159s Get:3 http://ftpmaster.internal/ubuntu oracular/main ppc64el libdebuginfod-common all 0.191-1 [14.6 kB]
159s Get:4 http://ftpmaster.internal/ubuntu oracular/main ppc64el fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB]
159s Get:5 http://ftpmaster.internal/ubuntu oracular/universe ppc64el fonts-glyphicons-halflings all 1.009~3.4.1+dfsg-3 [118 kB]
159s Get:6 http://ftpmaster.internal/ubuntu oracular/main ppc64el fonts-mathjax all 2.7.9+dfsg-1 [2208 kB]
160s Get:7 http://ftpmaster.internal/ubuntu oracular/main ppc64el libbabeltrace1 ppc64el 1.5.11-3build3 [209 kB]
160s Get:8 http://ftpmaster.internal/ubuntu oracular/main ppc64el libdebuginfod1t64 ppc64el 0.191-1 [18.4 kB]
160s Get:9 http://ftpmaster.internal/ubuntu oracular/main ppc64el libpython3.12t64 ppc64el 3.12.4-1 [2542 kB]
160s Get:10 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB]
160s Get:11 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsource-highlight4t64 ppc64el 3.1.9-4.3build1 [288 kB]
160s Get:12 http://ftpmaster.internal/ubuntu oracular/main ppc64el gdb ppc64el 15.0.50.20240403-0ubuntu1 [5088 kB]
162s Get:13 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-platformdirs all 4.2.1-1 [16.3 kB]
162s Get:14 http://ftpmaster.internal/ubuntu oracular-proposed/universe ppc64el python3-traitlets all 5.14.3-1 [71.3 kB]
162s Get:15 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyter-core all 5.3.2-2 [25.5 kB]
162s Get:16 http://ftpmaster.internal/ubuntu oracular/universe ppc64el jupyter-core all 5.3.2-2 [4038 B]
162s Get:17 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB]
162s Get:18 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB]
162s Get:19 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-bootstrap all 3.4.1+dfsg-3 [129 kB]
162s Get:20 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB]
162s Get:21 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB]
162s Get:22 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB]
162s Get:23 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-es6-promise all 4.2.8-12 [14.1 kB]
162s Get:24 http://ftpmaster.internal/ubuntu oracular/universe ppc64el node-jed all 1.1.1-4 [15.2 kB]
162s Get:25 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jed all 1.1.1-4 [2584 B]
162s Get:26 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB]
162s Get:27 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB]
162s Get:28 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB]
162s Get:29 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-mathjax all 2.7.9+dfsg-1 [5665 kB]
164s Get:30 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-moment all 2.29.4+ds-1 [147 kB]
164s Get:31 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-requirejs all 2.3.6+ds+~2.1.37-1 [201 kB]
164s Get:32 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-requirejs-text all 2.0.12-1.1 [9056 B]
164s Get:33 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-text-encoding all 0.7.0-5 [140 kB]
164s Get:34 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-xterm all 5.3.0-2 [476 kB]
164s Get:35 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-ptyprocess all 0.7.0-5 [15.1 kB]
164s Get:36 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-tornado ppc64el 6.4.1-1 [298 kB]
164s Get:37 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-terminado all 0.18.1-1 [13.2 kB]
164s Get:38 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-argon2 ppc64el 21.1.0-2build1 [21.7 kB]
164s Get:39 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-comm all 0.2.1-1 [7016 B]
164s Get:40 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-bytecode all 0.15.1-3 [44.7 kB]
164s Get:41 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-coverage ppc64el 7.4.4+dfsg1-0ubuntu2 [149 kB]
164s Get:42 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pydevd ppc64el 2.10.0+ds-10ubuntu1 [655 kB]
164s Get:43 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB]
165s Get:44 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-decorator all 5.1.1-5 [10.1 kB]
165s Get:45 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-parso all 0.8.3-1 [67.2 kB]
165s Get:46 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB]
165s Get:47 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jedi all 0.19.1+ds1-1 [693 kB]
165s Get:48 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-matplotlib-inline all 0.1.6-2 [8784 B]
165s Get:49 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-pexpect all 4.9-2 [48.1 kB]
165s Get:50 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB]
165s Get:51 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-prompt-toolkit all 3.0.46-1 [256 kB]
165s Get:52 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-asttokens all 2.4.1-1 [20.9 kB]
165s Get:53 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-executing all 2.0.1-0.1 [23.3 kB]
165s Get:54 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pure-eval all 0.2.2-2 [11.1 kB]
165s Get:55 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-stack-data all 0.6.3-1 [22.0 kB]
165s Get:56 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipython all 8.20.0-1ubuntu1 [561 kB]
165s Get:57 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-dateutil all 2.9.0-2 [80.3 kB]
165s Get:58 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-entrypoints all 0.4-2 [7146 B]
165s Get:59 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nest-asyncio all 1.5.4-1 [6256 B]
165s Get:60 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-py all 1.11.0-2 [72.7 kB]
165s Get:61 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libnorm1t64 ppc64el 1.5.9+dfsg-3.1build1 [194 kB]
165s Get:62 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libpgm-5.3-0t64 ppc64el 5.3.128~dfsg-2.1build1 [185 kB]
165s Get:63 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsodium23 ppc64el 1.0.18-1build3 [150 kB]
165s Get:64 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libzmq5 ppc64el 4.3.5-1build2 [297 kB]
166s Get:65 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-zmq ppc64el 24.0.1-5build1 [316 kB]
166s Get:66 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB]
166s Get:67 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-packaging all 24.0-1 [41.1 kB]
166s Get:68 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-psutil ppc64el 5.9.8-2build2 [197 kB]
166s Get:69 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipykernel all 6.29.3-1ubuntu1 [82.6 kB]
166s Get:70 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipython-genutils all 0.2.0-6 [22.0 kB]
166s Get:71 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-webencodings all 0.5.1-5 [11.5 kB]
166s Get:72 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-html5lib all 1.1-6 [88.8 kB]
166s Get:73 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-bleach all 6.1.0-2 [49.6 kB]
166s Get:74 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-soupsieve all 2.5-1 [33.0 kB]
166s Get:75 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-bs4 all 4.12.3-1 [109 kB]
166s Get:76 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-defusedxml all 0.7.1-2 [42.0 kB]
166s Get:77 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyterlab-pygments all 0.2.2-3 [6054 B]
166s Get:78 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-mistune all 3.0.2-1 [32.8 kB]
166s Get:79 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-fastjsonschema all 2.19.1-1 [19.7 kB]
166s Get:80 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbformat all 5.9.1-1 [41.2 kB]
166s Get:81 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbclient all 0.8.0-1 [55.6 kB]
166s Get:82 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pandocfilters all 1.5.1-1 [23.6 kB]
166s Get:83 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python-tinycss2-common all 1.3.0-1 [34.1 kB]
166s Get:84 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-tinycss2 all 1.3.0-1 [19.6 kB]
166s Get:85 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbconvert all 7.16.4-1 [156 kB]
166s Get:86 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB]
166s Get:87 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-send2trash all 1.8.2-1 [15.5 kB]
166s Get:88 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB]
166s Get:89 http://ftpmaster.internal/ubuntu oracular/universe ppc64el jupyter-notebook all 6.4.12-2.2ubuntu1 [10.4 kB]
166s Get:90 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-sphinxdoc all 7.2.6-8 [150 kB]
166s Get:91 http://ftpmaster.internal/ubuntu oracular/main ppc64el sphinx-rtd-theme-common all 2.0.0+dfsg-1 [1012 kB]
167s Get:92 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python-notebook-doc all 6.4.12-2.2ubuntu1 [2540 kB]
167s Get:93 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-iniconfig all 1.1.1-2 [6024 B]
167s Get:94 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pluggy all 1.5.0-1 [21.0 kB]
167s Get:95 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pytest all 7.4.4-1 [305 kB]
167s Get:96 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-requests-unixsocket all 0.3.0-4 [7274 B]
168s Preconfiguring packages ...
168s Fetched 34.9 MB in 10s (3384 kB/s)
168s Selecting previously unselected package fonts-lato.
168s (Reading database ... 72676 files and directories currently installed.)
168s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ...
168s Unpacking fonts-lato (2.015-1) ...
168s Selecting previously unselected package libdebuginfod-common.
168s Preparing to unpack .../01-libdebuginfod-common_0.191-1_all.deb ...
168s Unpacking libdebuginfod-common (0.191-1) ...
168s Selecting previously unselected package fonts-font-awesome.
168s Preparing to unpack .../02-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ...
168s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
168s Selecting previously unselected package fonts-glyphicons-halflings.
168s Preparing to unpack .../03-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ...
168s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ...
168s Selecting previously unselected package fonts-mathjax.
168s Preparing to unpack .../04-fonts-mathjax_2.7.9+dfsg-1_all.deb ...
168s Unpacking fonts-mathjax (2.7.9+dfsg-1) ...
169s Selecting previously unselected package libbabeltrace1:ppc64el.
169s Preparing to unpack .../05-libbabeltrace1_1.5.11-3build3_ppc64el.deb ...
169s Unpacking libbabeltrace1:ppc64el (1.5.11-3build3) ...
169s Selecting previously unselected package libdebuginfod1t64:ppc64el.
169s Preparing to unpack .../06-libdebuginfod1t64_0.191-1_ppc64el.deb ...
169s Unpacking libdebuginfod1t64:ppc64el (0.191-1) ...
169s Selecting previously unselected package libpython3.12t64:ppc64el.
169s Preparing to unpack .../07-libpython3.12t64_3.12.4-1_ppc64el.deb ...
169s Unpacking libpython3.12t64:ppc64el (3.12.4-1) ...
169s Selecting previously unselected package libsource-highlight-common.
169s Preparing to unpack .../08-libsource-highlight-common_3.1.9-4.3build1_all.deb ...
169s Unpacking libsource-highlight-common (3.1.9-4.3build1) ...
169s Selecting previously unselected package libsource-highlight4t64:ppc64el.
169s Preparing to unpack .../09-libsource-highlight4t64_3.1.9-4.3build1_ppc64el.deb ...
169s Unpacking libsource-highlight4t64:ppc64el (3.1.9-4.3build1) ...
169s Selecting previously unselected package gdb.
169s Preparing to unpack .../10-gdb_15.0.50.20240403-0ubuntu1_ppc64el.deb ...
169s Unpacking gdb (15.0.50.20240403-0ubuntu1) ...
169s Selecting previously unselected package python3-platformdirs.
169s Preparing to unpack .../11-python3-platformdirs_4.2.1-1_all.deb ...
169s Unpacking python3-platformdirs (4.2.1-1) ...
169s Selecting previously unselected package python3-traitlets.
169s Preparing to unpack .../12-python3-traitlets_5.14.3-1_all.deb ...
169s Unpacking python3-traitlets (5.14.3-1) ...
169s Selecting previously unselected package python3-jupyter-core.
169s Preparing to unpack .../13-python3-jupyter-core_5.3.2-2_all.deb ...
169s Unpacking python3-jupyter-core (5.3.2-2) ...
169s Selecting previously unselected package jupyter-core.
169s Preparing to unpack .../14-jupyter-core_5.3.2-2_all.deb ...
169s Unpacking jupyter-core (5.3.2-2) ...
169s Selecting previously unselected package libjs-underscore.
169s Preparing to unpack .../15-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ...
169s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
169s Selecting previously unselected package libjs-backbone.
169s Preparing to unpack .../16-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ...
169s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ...
169s Selecting previously unselected package libjs-bootstrap.
169s Preparing to unpack .../17-libjs-bootstrap_3.4.1+dfsg-3_all.deb ...
169s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ...
169s Selecting previously unselected package libjs-jquery.
169s Preparing to unpack .../18-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ...
169s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
169s Selecting previously unselected package libjs-bootstrap-tour.
169s Preparing to unpack .../19-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ...
169s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ...
169s Selecting previously unselected package libjs-codemirror.
169s Preparing to unpack .../20-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ...
169s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ...
169s Selecting previously unselected package libjs-es6-promise.
169s Preparing to unpack .../21-libjs-es6-promise_4.2.8-12_all.deb ...
169s Unpacking libjs-es6-promise (4.2.8-12) ...
169s Selecting previously unselected package node-jed.
169s Preparing to unpack .../22-node-jed_1.1.1-4_all.deb ...
169s Unpacking node-jed (1.1.1-4) ...
169s Selecting previously unselected package libjs-jed.
169s Preparing to unpack .../23-libjs-jed_1.1.1-4_all.deb ...
169s Unpacking libjs-jed (1.1.1-4) ...
169s Selecting previously unselected package libjs-jquery-typeahead.
169s Preparing to unpack .../24-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ...
169s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ...
169s Selecting previously unselected package libjs-jquery-ui.
169s Preparing to unpack .../25-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ...
169s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ...
170s Selecting previously unselected package libjs-marked.
170s Preparing to unpack .../26-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ...
170s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ...
170s Selecting previously unselected package libjs-mathjax.
170s Preparing to unpack .../27-libjs-mathjax_2.7.9+dfsg-1_all.deb ...
170s Unpacking libjs-mathjax (2.7.9+dfsg-1) ...
171s Selecting previously unselected package libjs-moment.
171s Preparing to unpack .../28-libjs-moment_2.29.4+ds-1_all.deb ...
171s Unpacking libjs-moment (2.29.4+ds-1) ...
171s Selecting previously unselected package libjs-requirejs.
171s Preparing to unpack .../29-libjs-requirejs_2.3.6+ds+~2.1.37-1_all.deb ...
171s Unpacking libjs-requirejs (2.3.6+ds+~2.1.37-1) ...
171s Selecting previously unselected package libjs-requirejs-text.
171s Preparing to unpack .../30-libjs-requirejs-text_2.0.12-1.1_all.deb ...
171s Unpacking libjs-requirejs-text (2.0.12-1.1) ...
171s Selecting previously unselected package libjs-text-encoding.
171s Preparing to unpack .../31-libjs-text-encoding_0.7.0-5_all.deb ...
171s Unpacking libjs-text-encoding (0.7.0-5) ...
171s Selecting previously unselected package libjs-xterm.
171s Preparing to unpack .../32-libjs-xterm_5.3.0-2_all.deb ...
171s Unpacking libjs-xterm (5.3.0-2) ...
171s Selecting previously unselected package python3-ptyprocess.
171s Preparing to unpack .../33-python3-ptyprocess_0.7.0-5_all.deb ...
171s Unpacking python3-ptyprocess (0.7.0-5) ...
171s Selecting previously unselected package python3-tornado.
171s Preparing to unpack .../34-python3-tornado_6.4.1-1_ppc64el.deb ...
171s Unpacking python3-tornado (6.4.1-1) ...
171s Selecting previously unselected package python3-terminado.
171s Preparing to unpack .../35-python3-terminado_0.18.1-1_all.deb ...
171s Unpacking python3-terminado (0.18.1-1) ...
171s Selecting previously unselected package python3-argon2.
171s Preparing to unpack .../36-python3-argon2_21.1.0-2build1_ppc64el.deb ...
171s Unpacking python3-argon2 (21.1.0-2build1) ...
171s Selecting previously unselected package python3-comm.
171s Preparing to unpack .../37-python3-comm_0.2.1-1_all.deb ...
171s Unpacking python3-comm (0.2.1-1) ...
171s Selecting previously unselected package python3-bytecode.
171s Preparing to unpack .../38-python3-bytecode_0.15.1-3_all.deb ...
171s Unpacking python3-bytecode (0.15.1-3) ...
171s Selecting previously unselected package python3-coverage.
171s Preparing to unpack .../39-python3-coverage_7.4.4+dfsg1-0ubuntu2_ppc64el.deb ...
171s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
171s Selecting previously unselected package python3-pydevd.
171s Preparing to unpack .../40-python3-pydevd_2.10.0+ds-10ubuntu1_ppc64el.deb ...
171s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ...
171s Selecting previously unselected package python3-debugpy.
171s Preparing to unpack .../41-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ...
171s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ...
171s Selecting previously unselected package python3-decorator.
171s Preparing to unpack .../42-python3-decorator_5.1.1-5_all.deb ...
171s Unpacking python3-decorator (5.1.1-5) ...
171s Selecting previously unselected package python3-parso.
171s Preparing to unpack .../43-python3-parso_0.8.3-1_all.deb ...
171s Unpacking python3-parso (0.8.3-1) ...
171s Selecting previously unselected package python3-typeshed.
171s Preparing to unpack .../44-python3-typeshed_0.0~git20231111.6764465-3_all.deb ...
171s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ...
172s Selecting previously unselected package python3-jedi.
172s Preparing to unpack .../45-python3-jedi_0.19.1+ds1-1_all.deb ...
172s Unpacking python3-jedi (0.19.1+ds1-1) ...
172s Selecting previously unselected package python3-matplotlib-inline.
172s Preparing to unpack .../46-python3-matplotlib-inline_0.1.6-2_all.deb ...
172s Unpacking python3-matplotlib-inline (0.1.6-2) ...
172s Selecting previously unselected package python3-pexpect.
172s Preparing to unpack .../47-python3-pexpect_4.9-2_all.deb ...
172s Unpacking python3-pexpect (4.9-2) ...
173s Selecting previously unselected package python3-wcwidth.
173s Preparing to unpack .../48-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ...
173s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ...
173s Selecting previously unselected package python3-prompt-toolkit.
173s Preparing to unpack .../49-python3-prompt-toolkit_3.0.46-1_all.deb ...
173s Unpacking python3-prompt-toolkit (3.0.46-1) ...
173s Selecting previously unselected package python3-asttokens.
173s Preparing to unpack .../50-python3-asttokens_2.4.1-1_all.deb ...
173s Unpacking python3-asttokens (2.4.1-1) ...
173s Selecting previously unselected package python3-executing.
173s Preparing to unpack .../51-python3-executing_2.0.1-0.1_all.deb ...
173s Unpacking python3-executing (2.0.1-0.1) ...
173s Selecting previously unselected package python3-pure-eval.
173s Preparing to unpack .../52-python3-pure-eval_0.2.2-2_all.deb ...
173s Unpacking python3-pure-eval (0.2.2-2) ...
173s Selecting previously unselected package python3-stack-data.
173s Preparing to unpack .../53-python3-stack-data_0.6.3-1_all.deb ...
173s Unpacking python3-stack-data (0.6.3-1) ...
173s Selecting previously unselected package python3-ipython.
173s Preparing to unpack .../54-python3-ipython_8.20.0-1ubuntu1_all.deb ...
173s Unpacking python3-ipython (8.20.0-1ubuntu1) ...
173s Selecting previously unselected package python3-dateutil.
173s Preparing to unpack .../55-python3-dateutil_2.9.0-2_all.deb ...
173s Unpacking python3-dateutil (2.9.0-2) ...
173s Selecting previously unselected package python3-entrypoints.
173s Preparing to unpack .../56-python3-entrypoints_0.4-2_all.deb ...
173s Unpacking python3-entrypoints (0.4-2) ...
173s Selecting previously unselected package python3-nest-asyncio.
173s Preparing to unpack .../57-python3-nest-asyncio_1.5.4-1_all.deb ...
173s Unpacking python3-nest-asyncio (1.5.4-1) ...
173s Selecting previously unselected package python3-py.
173s Preparing to unpack .../58-python3-py_1.11.0-2_all.deb ...
173s Unpacking python3-py (1.11.0-2) ...
173s Selecting previously unselected package libnorm1t64:ppc64el.
173s Preparing to unpack .../59-libnorm1t64_1.5.9+dfsg-3.1build1_ppc64el.deb ...
173s Unpacking libnorm1t64:ppc64el (1.5.9+dfsg-3.1build1) ...
173s Selecting previously unselected package libpgm-5.3-0t64:ppc64el.
173s Preparing to unpack .../60-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_ppc64el.deb ...
173s Unpacking libpgm-5.3-0t64:ppc64el (5.3.128~dfsg-2.1build1) ...
173s Selecting previously unselected package libsodium23:ppc64el.
173s Preparing to unpack .../61-libsodium23_1.0.18-1build3_ppc64el.deb ...
173s Unpacking libsodium23:ppc64el (1.0.18-1build3) ...
173s Selecting previously unselected package libzmq5:ppc64el.
173s Preparing to unpack .../62-libzmq5_4.3.5-1build2_ppc64el.deb ...
173s Unpacking libzmq5:ppc64el (4.3.5-1build2) ...
173s Selecting previously unselected package python3-zmq.
173s Preparing to unpack .../63-python3-zmq_24.0.1-5build1_ppc64el.deb ...
173s Unpacking python3-zmq (24.0.1-5build1) ...
173s Selecting previously unselected package python3-jupyter-client.
173s Preparing to unpack .../64-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ...
173s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ...
173s Selecting previously unselected package python3-packaging.
173s Preparing to unpack .../65-python3-packaging_24.0-1_all.deb ...
173s Unpacking python3-packaging (24.0-1) ...
173s Selecting previously unselected package python3-psutil.
173s Preparing to unpack .../66-python3-psutil_5.9.8-2build2_ppc64el.deb ...
173s Unpacking python3-psutil (5.9.8-2build2) ...
173s Selecting previously unselected package python3-ipykernel.
173s Preparing to unpack .../67-python3-ipykernel_6.29.3-1ubuntu1_all.deb ...
173s Unpacking python3-ipykernel (6.29.3-1ubuntu1) ...
173s Selecting previously unselected package python3-ipython-genutils.
173s Preparing to unpack .../68-python3-ipython-genutils_0.2.0-6_all.deb ...
173s Unpacking python3-ipython-genutils (0.2.0-6) ...
173s Selecting previously unselected package python3-webencodings.
173s Preparing to unpack .../69-python3-webencodings_0.5.1-5_all.deb ...
173s Unpacking python3-webencodings (0.5.1-5) ...
173s Selecting previously unselected package python3-html5lib.
173s Preparing to unpack .../70-python3-html5lib_1.1-6_all.deb ... 173s Unpacking python3-html5lib (1.1-6) ... 173s Selecting previously unselected package python3-bleach. 173s Preparing to unpack .../71-python3-bleach_6.1.0-2_all.deb ... 173s Unpacking python3-bleach (6.1.0-2) ... 173s Selecting previously unselected package python3-soupsieve. 173s Preparing to unpack .../72-python3-soupsieve_2.5-1_all.deb ... 173s Unpacking python3-soupsieve (2.5-1) ... 173s Selecting previously unselected package python3-bs4. 173s Preparing to unpack .../73-python3-bs4_4.12.3-1_all.deb ... 173s Unpacking python3-bs4 (4.12.3-1) ... 173s Selecting previously unselected package python3-defusedxml. 173s Preparing to unpack .../74-python3-defusedxml_0.7.1-2_all.deb ... 173s Unpacking python3-defusedxml (0.7.1-2) ... 173s Selecting previously unselected package python3-jupyterlab-pygments. 173s Preparing to unpack .../75-python3-jupyterlab-pygments_0.2.2-3_all.deb ... 173s Unpacking python3-jupyterlab-pygments (0.2.2-3) ... 173s Selecting previously unselected package python3-mistune. 173s Preparing to unpack .../76-python3-mistune_3.0.2-1_all.deb ... 173s Unpacking python3-mistune (3.0.2-1) ... 173s Selecting previously unselected package python3-fastjsonschema. 173s Preparing to unpack .../77-python3-fastjsonschema_2.19.1-1_all.deb ... 173s Unpacking python3-fastjsonschema (2.19.1-1) ... 173s Selecting previously unselected package python3-nbformat. 173s Preparing to unpack .../78-python3-nbformat_5.9.1-1_all.deb ... 173s Unpacking python3-nbformat (5.9.1-1) ... 173s Selecting previously unselected package python3-nbclient. 173s Preparing to unpack .../79-python3-nbclient_0.8.0-1_all.deb ... 173s Unpacking python3-nbclient (0.8.0-1) ... 173s Selecting previously unselected package python3-pandocfilters. 173s Preparing to unpack .../80-python3-pandocfilters_1.5.1-1_all.deb ... 173s Unpacking python3-pandocfilters (1.5.1-1) ... 
173s Selecting previously unselected package python-tinycss2-common. 173s Preparing to unpack .../81-python-tinycss2-common_1.3.0-1_all.deb ... 173s Unpacking python-tinycss2-common (1.3.0-1) ... 173s Selecting previously unselected package python3-tinycss2. 173s Preparing to unpack .../82-python3-tinycss2_1.3.0-1_all.deb ... 173s Unpacking python3-tinycss2 (1.3.0-1) ... 173s Selecting previously unselected package python3-nbconvert. 173s Preparing to unpack .../83-python3-nbconvert_7.16.4-1_all.deb ... 173s Unpacking python3-nbconvert (7.16.4-1) ... 173s Selecting previously unselected package python3-prometheus-client. 174s Preparing to unpack .../84-python3-prometheus-client_0.19.0+ds1-1_all.deb ... 174s Unpacking python3-prometheus-client (0.19.0+ds1-1) ... 174s Selecting previously unselected package python3-send2trash. 174s Preparing to unpack .../85-python3-send2trash_1.8.2-1_all.deb ... 174s Unpacking python3-send2trash (1.8.2-1) ... 174s Selecting previously unselected package python3-notebook. 174s Preparing to unpack .../86-python3-notebook_6.4.12-2.2ubuntu1_all.deb ... 174s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ... 174s Selecting previously unselected package jupyter-notebook. 174s Preparing to unpack .../87-jupyter-notebook_6.4.12-2.2ubuntu1_all.deb ... 174s Unpacking jupyter-notebook (6.4.12-2.2ubuntu1) ... 174s Selecting previously unselected package libjs-sphinxdoc. 174s Preparing to unpack .../88-libjs-sphinxdoc_7.2.6-8_all.deb ... 174s Unpacking libjs-sphinxdoc (7.2.6-8) ... 174s Selecting previously unselected package sphinx-rtd-theme-common. 174s Preparing to unpack .../89-sphinx-rtd-theme-common_2.0.0+dfsg-1_all.deb ... 174s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 174s Selecting previously unselected package python-notebook-doc. 174s Preparing to unpack .../90-python-notebook-doc_6.4.12-2.2ubuntu1_all.deb ... 174s Unpacking python-notebook-doc (6.4.12-2.2ubuntu1) ... 
174s Selecting previously unselected package python3-iniconfig. 174s Preparing to unpack .../91-python3-iniconfig_1.1.1-2_all.deb ... 174s Unpacking python3-iniconfig (1.1.1-2) ... 174s Selecting previously unselected package python3-pluggy. 174s Preparing to unpack .../92-python3-pluggy_1.5.0-1_all.deb ... 174s Unpacking python3-pluggy (1.5.0-1) ... 174s Selecting previously unselected package python3-pytest. 174s Preparing to unpack .../93-python3-pytest_7.4.4-1_all.deb ... 174s Unpacking python3-pytest (7.4.4-1) ... 174s Selecting previously unselected package python3-requests-unixsocket. 174s Preparing to unpack .../94-python3-requests-unixsocket_0.3.0-4_all.deb ... 174s Unpacking python3-requests-unixsocket (0.3.0-4) ... 174s Selecting previously unselected package autopkgtest-satdep. 174s Preparing to unpack .../95-1-autopkgtest-satdep.deb ... 174s Unpacking autopkgtest-satdep (0) ... 174s Setting up python3-entrypoints (0.4-2) ... 174s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 174s Setting up python3-iniconfig (1.1.1-2) ... 174s Setting up python3-tornado (6.4.1-1) ... 175s Setting up libnorm1t64:ppc64el (1.5.9+dfsg-3.1build1) ... 175s Setting up python3-pure-eval (0.2.2-2) ... 175s Setting up python3-send2trash (1.8.2-1) ... 175s Setting up fonts-lato (2.015-1) ... 175s Setting up fonts-mathjax (2.7.9+dfsg-1) ... 175s Setting up libsodium23:ppc64el (1.0.18-1build3) ... 175s Setting up libjs-mathjax (2.7.9+dfsg-1) ... 175s Setting up python3-py (1.11.0-2) ... 176s Setting up libdebuginfod-common (0.191-1) ... 176s Setting up libjs-requirejs-text (2.0.12-1.1) ... 176s Setting up python3-parso (0.8.3-1) ... 176s Setting up python3-defusedxml (0.7.1-2) ... 176s Setting up python3-ipython-genutils (0.2.0-6) ... 176s Setting up python3-asttokens (2.4.1-1) ... 176s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 176s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 177s Setting up libjs-moment (2.29.4+ds-1) ... 
177s Setting up python3-pandocfilters (1.5.1-1) ... 177s Setting up libjs-requirejs (2.3.6+ds+~2.1.37-1) ... 177s Setting up libjs-es6-promise (4.2.8-12) ... 177s Setting up libjs-text-encoding (0.7.0-5) ... 177s Setting up python3-webencodings (0.5.1-5) ... 177s Setting up python3-platformdirs (4.2.1-1) ... 177s Setting up python3-psutil (5.9.8-2build2) ... 178s Setting up libsource-highlight-common (3.1.9-4.3build1) ... 178s Setting up python3-requests-unixsocket (0.3.0-4) ... 178s Setting up python3-jupyterlab-pygments (0.2.2-3) ... 178s Setting up libpython3.12t64:ppc64el (3.12.4-1) ... 178s Setting up libpgm-5.3-0t64:ppc64el (5.3.128~dfsg-2.1build1) ... 178s Setting up python3-decorator (5.1.1-5) ... 178s Setting up python3-packaging (24.0-1) ... 178s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 178s Setting up node-jed (1.1.1-4) ... 178s Setting up python3-typeshed (0.0~git20231111.6764465-3) ... 178s Setting up python3-executing (2.0.1-0.1) ... 179s Setting up libjs-xterm (5.3.0-2) ... 179s Setting up python3-nest-asyncio (1.5.4-1) ... 179s Setting up python3-bytecode (0.15.1-3) ... 179s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ... 179s Setting up libjs-jed (1.1.1-4) ... 179s Setting up python3-html5lib (1.1-6) ... 179s Setting up libbabeltrace1:ppc64el (1.5.11-3build3) ... 179s Setting up python3-pluggy (1.5.0-1) ... 179s Setting up python3-fastjsonschema (2.19.1-1) ... 180s Setting up python3-traitlets (5.14.3-1) ... 180s Setting up python-tinycss2-common (1.3.0-1) ... 180s Setting up python3-argon2 (21.1.0-2build1) ... 180s Setting up python3-dateutil (2.9.0-2) ... 180s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 180s Setting up python3-mistune (3.0.2-1) ... 180s Setting up python3-stack-data (0.6.3-1) ... 181s Setting up python3-soupsieve (2.5-1) ... 181s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 181s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 181s Setting up python3-jupyter-core (5.3.2-2) ... 
181s Setting up libjs-bootstrap (3.4.1+dfsg-3) ... 181s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 181s Setting up python3-ptyprocess (0.7.0-5) ... 181s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ... 181s Setting up python3-prompt-toolkit (3.0.46-1) ... 182s Setting up libdebuginfod1t64:ppc64el (0.191-1) ... 182s Setting up python3-tinycss2 (1.3.0-1) ... 182s Setting up libzmq5:ppc64el (4.3.5-1build2) ... 182s Setting up python3-jedi (0.19.1+ds1-1) ... 182s Setting up python3-pytest (7.4.4-1) ... 183s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ... 183s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 183s Setting up libsource-highlight4t64:ppc64el (3.1.9-4.3build1) ... 183s Setting up python3-nbformat (5.9.1-1) ... 183s Setting up python3-bs4 (4.12.3-1) ... 183s Setting up python3-bleach (6.1.0-2) ... 183s Setting up python3-matplotlib-inline (0.1.6-2) ... 183s Setting up python3-comm (0.2.1-1) ... 183s Setting up python3-prometheus-client (0.19.0+ds1-1) ... 184s Setting up gdb (15.0.50.20240403-0ubuntu1) ... 184s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ... 184s Setting up python3-pexpect (4.9-2) ... 184s Setting up python3-zmq (24.0.1-5build1) ... 184s Setting up libjs-sphinxdoc (7.2.6-8) ... 184s Setting up python3-terminado (0.18.1-1) ... 184s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ... 185s Setting up jupyter-core (5.3.2-2) ... 185s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ... 185s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ... 185s Setting up python-notebook-doc (6.4.12-2.2ubuntu1) ... 185s Setting up python3-nbclient (0.8.0-1) ... 186s Setting up python3-ipython (8.20.0-1ubuntu1) ... 186s Setting up python3-ipykernel (6.29.3-1ubuntu1) ... 187s Setting up python3-nbconvert (7.16.4-1) ... 187s Setting up python3-notebook (6.4.12-2.2ubuntu1) ... 187s Setting up jupyter-notebook (6.4.12-2.2ubuntu1) ... 187s Setting up autopkgtest-satdep (0) ... 187s Processing triggers for man-db (2.12.1-2) ... 
188s Processing triggers for libc-bin (2.39-0ubuntu9) ... 193s (Reading database ... 89292 files and directories currently installed.) 193s Removing autopkgtest-satdep (0) ... 193s autopkgtest [10:28:44]: test pytest: [----------------------- 195s ============================= test session starts ============================== 195s platform linux -- Python 3.12.4, pytest-7.4.4, pluggy-1.5.0 195s rootdir: /tmp/autopkgtest.E327Mm/build.4bM/src 195s collected 330 items / 5 deselected / 325 selected 195s 196s notebook/auth/tests/test_login.py EE [ 0%] 197s notebook/auth/tests/test_security.py .... [ 1%] 197s notebook/bundler/tests/test_bundler_api.py EEEEE [ 3%] 198s notebook/bundler/tests/test_bundler_tools.py ............. [ 7%] 198s notebook/bundler/tests/test_bundlerextension.py ... [ 8%] 198s notebook/nbconvert/tests/test_nbconvert_handlers.py ssssss [ 10%] 199s notebook/services/api/tests/test_api.py EEE [ 11%] 199s notebook/services/config/tests/test_config_api.py EEE [ 12%] 201s notebook/services/contents/tests/test_contents_api.py EsEEEEEEEEEEssEEsE [ 17%] 212s EEEEEEEEEEEEEEEEEEEEEEEEEsEEEEEEEEEEEssEEsEEEEEEEEEEEEEEEEEEEEEEEEE [ 38%] 212s notebook/services/contents/tests/test_fileio.py ... [ 39%] 212s notebook/services/contents/tests/test_largefilemanager.py . [ 39%] 212s notebook/services/contents/tests/test_manager.py .....s........ss....... [ 46%] 213s ...ss........ [ 50%] 215s notebook/services/kernels/tests/test_kernels_api.py EEEEEEEEEEEE [ 54%] 216s notebook/services/kernelspecs/tests/test_kernelspecs_api.py EEEEEEE [ 56%] 216s notebook/services/nbconvert/tests/test_nbconvert_api.py E [ 56%] 218s notebook/services/sessions/tests/test_sessionmanager.py FFFFFFFFF [ 59%] 221s notebook/services/sessions/tests/test_sessions_api.py EEEEEEEEEEEEEEEEEE [ 64%] 222s EEEE [ 66%] 223s notebook/terminal/tests/test_terminals_api.py EEEEEEEE [ 68%] 223s notebook/tests/test_config_manager.py . 
[ 68%] 224s notebook/tests/test_files.py EEEEE [ 70%] 225s notebook/tests/test_gateway.py EEEEEE [ 72%] 225s notebook/tests/test_i18n.py . [ 72%] 225s notebook/tests/test_log.py . [ 72%] 226s notebook/tests/test_nbextensions.py ................................... [ 83%] 230s notebook/tests/test_notebookapp.py FFFFFFFFF........F.EEEEEEE [ 91%] 230s notebook/tests/test_paths.py ..E [ 92%] 230s notebook/tests/test_serialize.py .. [ 93%] 231s notebook/tests/test_serverextensions.py ...FF [ 94%] 231s notebook/tests/test_traittypes.py ........... [ 98%] 232s notebook/tests/test_utils.py F...s [ 99%] 232s notebook/tree/tests/test_tree_handler.py E [100%] 232s 232s ==================================== ERRORS ==================================== 232s __________________ ERROR at setup of LoginTest.test_next_bad ___________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 
232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ___________________ ERROR at setup of LoginTest.test_next_ok ___________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s __________ ERROR at setup of BundleAPITest.test_bundler_import_error ___________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s """Make a test notebook. Borrowed from nbconvert test. 
Assumes the class 232s teardown will clean it up in the end.""" 232s > super().setup_class() 232s 232s notebook/bundler/tests/test_bundler_api.py:27: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:198: in setup_class 232s cls.wait_until_alive() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s _____________ ERROR at setup of BundleAPITest.test_bundler_invoke ______________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 
232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 
232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = 
True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. 
Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. 
Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 
232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 
232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | 
None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 
232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s """Make a test notebook. Borrowed from nbconvert test. Assumes the class 232s teardown will clean it up in the end.""" 232s > super().setup_class() 232s 232s notebook/bundler/tests/test_bundler_api.py:27: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:198: in setup_class 232s cls.wait_until_alive() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ___________ ERROR at setup of BundleAPITest.test_bundler_not_enabled ___________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 
(connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | 
None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 
232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s 
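The `wait_until_alive` helper quoted above is what ultimately raises `RuntimeError("The notebook server failed to start")`: it polls the contents API until the server answers, swallowing connection errors in between. A minimal, self-contained sketch of that polling pattern (the `MAX_WAITTIME`/`POLL_INTERVAL` values are illustrative, not the package's actual settings, and `fetch` stands in for `fetch_url`):

```python
import time

# Illustrative budget: poll once per second for up to 30 seconds.
MAX_WAITTIME = 30
POLL_INTERVAL = 1

def wait_until_alive(fetch, sleep=time.sleep):
    """Call `fetch` until it stops raising; RuntimeError once exhausted."""
    last_error = None
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            return fetch()
        except Exception as e:
            # Server not up yet (e.g. ConnectionRefusedError): retry after a pause.
            last_error = e
            sleep(POLL_INTERVAL)
    raise RuntimeError("The notebook server failed to start") from last_error
```

Injecting `fetch` and `sleep` makes the loop testable without a live server, which is exactly the failure mode in this log: every poll hits `[Errno 111] Connection refused` because the notebook server thread never came up.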
/usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s """Make a test notebook. Borrowed from nbconvert test. Assumes the class 232s teardown will clean it up in the end.""" 232s > super().setup_class() 232s 232s notebook/bundler/tests/test_bundler_api.py:27: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:198: in setup_class 232s cls.wait_until_alive() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ___________ ERROR at setup of BundleAPITest.test_missing_bundler_arg ___________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 
232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 
232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = 
True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. 
Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. 
Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 
232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 
232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
232s         :rtype: requests.Response
232s         """
232s 
232s         try:
232s             conn = self.get_connection(request.url, proxies)
232s         except LocationValueError as e:
232s             raise InvalidURL(e, request=request)
232s 
232s         self.cert_verify(conn, request.url, verify, cert)
232s         url = self.request_url(request, proxies)
232s         self.add_headers(
232s             request,
232s             stream=stream,
232s             timeout=timeout,
232s             verify=verify,
232s             cert=cert,
232s             proxies=proxies,
232s         )
232s 
232s         chunked = not (request.body is None or "Content-Length" in request.headers)
232s 
232s         if isinstance(timeout, tuple):
232s             try:
232s                 connect, read = timeout
232s                 timeout = TimeoutSauce(connect=connect, read=read)
232s             except ValueError:
232s                 raise ValueError(
232s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
232s                     f"or a single float to set both timeouts to the same value."
232s                 )
232s         elif isinstance(timeout, TimeoutSauce):
232s             pass
232s         else:
232s             timeout = TimeoutSauce(connect=timeout, read=timeout)
232s 
232s         try:
232s >           resp = conn.urlopen(
232s                 method=request.method,
232s                 url=url,
232s                 body=request.body,
232s                 headers=request.headers,
232s                 redirect=False,
232s                 assert_same_host=False,
232s                 preload_content=False,
232s                 decode_content=False,
232s                 retries=self.max_retries,
232s                 timeout=timeout,
232s                 chunked=chunked,
232s             )
232s 
232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
232s     retries = retries.increment(
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
232s 
232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
232s method = 'GET', url = '/a%40b/api/contents', response = None
232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
232s _pool = 
232s _stacktrace = 
232s 
232s     def increment(
232s         self,
232s         method: str | None = None,
232s         url: str | None = None,
232s         response: BaseHTTPResponse | None = None,
232s         error: Exception | None = None,
232s         _pool: ConnectionPool | None = None,
232s         _stacktrace: TracebackType | None = None,
232s     ) -> Retry:
232s         """Return a new Retry object with incremented retry counters.
232s 
232s         :param response: A response object, or None, if the server did not
232s             return a response.
232s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
232s         :param Exception error: An error encountered during the request, or
232s             None if the response was received successfully.
232s 
232s         :return: A new ``Retry`` object.
232s         """
232s         if self.total is False and error:
232s             # Disabled, indicate to re-raise the error.
232s             raise reraise(type(error), error, _stacktrace)
232s 
232s         total = self.total
232s         if total is not None:
232s             total -= 1
232s 
232s         connect = self.connect
232s         read = self.read
232s         redirect = self.redirect
232s         status_count = self.status
232s         other = self.other
232s         cause = "unknown"
232s         status = None
232s         redirect_location = None
232s 
232s         if error and self._is_connection_error(error):
232s             # Connect retry?
232s             if connect is False:
232s                 raise reraise(type(error), error, _stacktrace)
232s             elif connect is not None:
232s                 connect -= 1
232s 
232s         elif error and self._is_read_error(error):
232s             # Read retry?
232s             if read is False or method is None or not self._is_method_retryable(method):
232s                 raise reraise(type(error), error, _stacktrace)
232s             elif read is not None:
232s                 read -= 1
232s 
232s         elif error:
232s             # Other retry?
232s             if other is not None:
232s                 other -= 1
232s 
232s         elif response and response.get_redirect_location():
232s             # Redirect retry?
232s             if redirect is not None:
232s                 redirect -= 1
232s             cause = "too many redirects"
232s             response_redirect_location = response.get_redirect_location()
232s             if response_redirect_location:
232s                 redirect_location = response_redirect_location
232s             status = response.status
232s 
232s         else:
232s             # Incrementing because of a server error like a 500 in
232s             # status_forcelist and the given method is in the allowed_methods
232s             cause = ResponseError.GENERIC_ERROR
232s             if response and response.status:
232s                 if status_count is not None:
232s                     status_count -= 1
232s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
232s                 status = response.status
232s 
232s         history = self.history + (
232s             RequestHistory(method, url, error, status, redirect_location),
232s         )
232s 
232s         new_retry = self.new(
232s             total=total,
232s             connect=connect,
232s             read=read,
232s             redirect=redirect,
232s             status=status_count,
232s             other=other,
232s             history=history,
232s         )
232s 
232s         if new_retry.is_exhausted():
232s             reason = error or ResponseError(cause)
232s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
232s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
232s 
232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
232s 
232s During handling of the above exception, another exception occurred:
232s 
232s cls = 
232s 
232s     @classmethod
232s     def wait_until_alive(cls):
232s         """Wait for the server to be alive"""
232s         url = cls.base_url() + 'api/contents'
232s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
232s             try:
232s >               cls.fetch_url(url)
232s 
232s notebook/tests/launchnotebook.py:53: 
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
232s notebook/tests/launchnotebook.py:82: in fetch_url
232s     return requests.get(url)
232s 
232s /usr/lib/python3/dist-packages/requests/api.py:73: in get
232s     return request("get", url, params=params, **kwargs)
232s /usr/lib/python3/dist-packages/requests/api.py:59: in request
232s     return session.request(method=method, url=url, **kwargs)
232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
232s     resp = self.send(prep, **send_kwargs)
232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
232s     r = adapter.send(request, **kwargs)
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
232s 
232s self = 
232s request = , stream = False
232s timeout = Timeout(connect=None, read=None, total=None), verify = True
232s cert = None, proxies = OrderedDict()
232s 
232s     def send(
232s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
232s     ):
232s         """Sends PreparedRequest object. Returns Response object.
232s 
232s         :param request: The :class:`PreparedRequest ` being sent.
232s         :param stream: (optional) Whether to stream the request content.
232s         :param timeout: (optional) How long to wait for the server to send
232s             data before giving up, as a float, or a :ref:`(connect timeout,
232s             read timeout) ` tuple.
232s         :type timeout: float or tuple or urllib3 Timeout object
232s         :param verify: (optional) Either a boolean, in which case it controls whether
232s             we verify the server's TLS certificate, or a string, in which case it
232s             must be a path to a CA bundle to use
232s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
232s         :param proxies: (optional) The proxies dictionary to apply to the request.
232s         :rtype: requests.Response
232s         """
232s 
232s         try:
232s             conn = self.get_connection(request.url, proxies)
232s         except LocationValueError as e:
232s             raise InvalidURL(e, request=request)
232s 
232s         self.cert_verify(conn, request.url, verify, cert)
232s         url = self.request_url(request, proxies)
232s         self.add_headers(
232s             request,
232s             stream=stream,
232s             timeout=timeout,
232s             verify=verify,
232s             cert=cert,
232s             proxies=proxies,
232s         )
232s 
232s         chunked = not (request.body is None or "Content-Length" in request.headers)
232s 
232s         if isinstance(timeout, tuple):
232s             try:
232s                 connect, read = timeout
232s                 timeout = TimeoutSauce(connect=connect, read=read)
232s             except ValueError:
232s                 raise ValueError(
232s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
232s                     f"or a single float to set both timeouts to the same value."
232s                 )
232s         elif isinstance(timeout, TimeoutSauce):
232s             pass
232s         else:
232s             timeout = TimeoutSauce(connect=timeout, read=timeout)
232s 
232s         try:
232s             resp = conn.urlopen(
232s                 method=request.method,
232s                 url=url,
232s                 body=request.body,
232s                 headers=request.headers,
232s                 redirect=False,
232s                 assert_same_host=False,
232s                 preload_content=False,
232s                 decode_content=False,
232s                 retries=self.max_retries,
232s                 timeout=timeout,
232s                 chunked=chunked,
232s             )
232s 
232s         except (ProtocolError, OSError) as err:
232s             raise ConnectionError(err, request=request)
232s 
232s         except MaxRetryError as e:
232s             if isinstance(e.reason, ConnectTimeoutError):
232s                 # TODO: Remove this in 3.0.0: see #2811
232s                 if not isinstance(e.reason, NewConnectionError):
232s                     raise ConnectTimeout(e, request=request)
232s 
232s             if isinstance(e.reason, ResponseError):
232s                 raise RetryError(e, request=request)
232s 
232s             if isinstance(e.reason, _ProxyError):
232s                 raise ProxyError(e, request=request)
232s 
232s             if isinstance(e.reason, _SSLError):
232s                 # This branch is for urllib3 v1.22 and later.
232s                 raise SSLError(e, request=request)
232s 
232s >           raise ConnectionError(e, request=request)
232s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
232s 
232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
232s 
232s The above exception was the direct cause of the following exception:
232s 
232s cls = 
232s 
232s     @classmethod
232s     def setup_class(cls):
232s         """Make a test notebook. Borrowed from nbconvert test. Assumes the class
232s         teardown will clean it up in the end."""
232s >       super().setup_class()
232s 
232s notebook/bundler/tests/test_bundler_api.py:27: 
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
232s notebook/tests/launchnotebook.py:198: in setup_class
232s     cls.wait_until_alive()
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
232s 
232s cls = 
232s 
232s     @classmethod
232s     def wait_until_alive(cls):
232s         """Wait for the server to be alive"""
232s         url = cls.base_url() + 'api/contents'
232s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
232s             try:
232s                 cls.fetch_url(url)
232s             except ModuleNotFoundError as error:
232s                 # Errors that should be immediately thrown back to caller
232s                 raise error
232s             except Exception as e:
232s                 if not cls.notebook_thread.is_alive():
232s >                   raise RuntimeError("The notebook server failed to start") from e
232s E                   RuntimeError: The notebook server failed to start
232s 
232s notebook/tests/launchnotebook.py:59: RuntimeError
232s ___________ ERROR at setup of BundleAPITest.test_notebook_not_found ____________
232s 
232s self = 
232s 
232s     def _new_conn(self) -> socket.socket:
232s         """Establish a socket connection and set nodelay settings on it.
232s 
232s         :return: New socket connection.
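The traceback context shows `Retry(total=0, ...)`: the budget is already zero, so the first connection error exhausts it and `MaxRetryError` is raised. A minimal sketch of that bookkeeping, using a hypothetical `MiniRetry` class rather than urllib3's real `Retry` (which also tracks connect/read/redirect/status counters):

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class MiniRetry:
    """Toy model: increment() returns a NEW object with a decremented budget."""
    total: Optional[int] = 0

    def is_exhausted(self) -> bool:
        # None means "retry forever"; a counter below zero means no budget left.
        return self.total is not None and self.total < 0

    def increment(self) -> "MiniRetry":
        new_total = None if self.total is None else self.total - 1
        new = replace(self, total=new_total)
        if new.is_exhausted():
            # Mirrors urllib3 raising MaxRetryError once counters run out.
            raise RuntimeError("max retries exceeded")
        return new

MiniRetry(total=1).increment()   # one retry left -> returns MiniRetry(total=0)
```

With `total=0`, as in the log, the very first `increment()` call raises, which is why the test client never retries the refused connection.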
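Every one of these setup errors bottoms out in `[Errno 111] Connection refused`: nothing is listening on localhost:12341 because the notebook server thread died before binding its port. The OS-level behaviour can be reproduced with the stdlib alone; note that the bind-then-release trick for obtaining a port with no listener is an assumption (the port is only very likely to remain unused for the instant between release and connect):

```python
import socket

def refused_port() -> int:
    """Pick a port that (almost certainly) has no listener on it."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))    # let the OS choose a free port
        return s.getsockname()[1]   # socket closes on exit, so nothing listens

port = refused_port()
try:
    # Same call urllib3's create_connection() ultimately makes.
    socket.create_connection(("127.0.0.1", port), timeout=1)
    refused = False
except ConnectionRefusedError:
    refused = True                  # errno 111 on Linux
```

urllib3 catches this `OSError` subclass in `_new_conn()` and re-raises it as `NewConnectionError`, which is the innermost cause reported above.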
232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 
232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = 
True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. 
Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. 
Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 
232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 
232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
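The `raise NewConnectionError(...) from e` pattern visible here is also why the log keeps printing "The above exception was the direct cause of the following exception": explicit chaining stores the low-level error on the new exception's `__cause__`. A small self-contained illustration (the function names are made up for the sketch):

```python
def connect():
    # Stand-in for the failing socket connect in the log.
    raise ConnectionRefusedError(111, "Connection refused")

def send():
    try:
        connect()
    except OSError as exc:
        # Explicit chaining: the original error rides along as __cause__,
        # so tracebacks show both errors joined by "direct cause".
        raise RuntimeError("Failed to establish a new connection") from exc

try:
    send()
except RuntimeError as err:
    cause = err.__cause__   # the original ConnectionRefusedError
```

This is how one refused `connect()` surfaces three times in the log: once as `ConnectionRefusedError`, once wrapped in `NewConnectionError`/`MaxRetryError`, and once as requests' `ConnectionError`.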
232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | 
None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 
232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s """Make a test notebook. Borrowed from nbconvert test. Assumes the class 232s teardown will clean it up in the end.""" 232s > super().setup_class() 232s 232s notebook/bundler/tests/test_bundler_api.py:27: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:198: in setup_class 232s cls.wait_until_alive() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ___________________ ERROR at setup of APITest.test_get_spec ____________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 
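Every setup error in this run fails the same way: nothing is listening on the port the harness polls, so each `fetch_url` attempt dies with `[Errno 111] Connection refused` until `wait_until_alive` gives up and raises `RuntimeError`. A minimal stdlib sketch of that kind of liveness probe (the host and port here are stand-ins, not the harness's actual values):

```python
import socket

def port_is_alive(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        # ConnectionRefusedError ([Errno 111]) and connect timeouts
        # are both subclasses of OSError, so one except covers both.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

When the server thread has died, as in the traceback above, every probe fails and the harness escalates to `RuntimeError` instead of polling forever.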
232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 
232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = 
True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. 
Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. 
Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 
232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 
232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | 
None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 
232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s 
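The `Retry(total=0, connect=None, read=False, ...)` object shown above explains why there is exactly one attempt per request: `increment()` decrements the total budget to -1 on the first connection error, finds the new state exhausted, and raises `MaxRetryError` from the underlying error. A simplified stand-alone sketch of that budget accounting (not urllib3's actual class, just the exhaustion rule it applies):

```python
class RetryBudget:
    """Simplified model of the exhaustion rule in urllib3's Retry.increment."""

    def __init__(self, total):
        self.total = total  # remaining attempts; None means unlimited

    def increment(self, error):
        """Consume one attempt; raise if the budget is now exhausted."""
        remaining = None if self.total is None else self.total - 1
        if remaining is not None and remaining < 0:
            # urllib3 raises MaxRetryError(pool, url, reason) at this point
            raise RuntimeError(f"Max retries exceeded (caused by {error!r})") from error
        return RetryBudget(remaining)
```

With `total=0`, the very first refused connect exhausts the budget, which is why the harness sees `MaxRetryError` wrapping `NewConnectionError` without any retries.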
/usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 
232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
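The `setup_class` above builds its sandbox with a `tmp()` helper that swallows `errno.EEXIST` so repeated calls for the same directory are harmless. On Python 3 the same pattern collapses to `os.makedirs(..., exist_ok=True)`; a sketch (the directory names are illustrative, not the harness's actual layout):

```python
import os
import tempfile

def tmp(base, *parts):
    """Create (or reuse) a nested directory, like the log's tmp() helper."""
    path = os.path.join(base, *parts)
    os.makedirs(path, exist_ok=True)  # replaces the errno.EEXIST try/except
    return path

base = tempfile.mkdtemp()
home_dir = tmp(base, "home")
tmp(base, "home")  # second call is a no-op instead of raising OSError
```

Each fixture directory (`home`, `data`, `config`, `runtime`, `notebooks`) is carved out of one temporary root, so the class teardown only has to remove that root.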
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
232s     @classmethod
232s     def wait_until_alive(cls):
232s         """Wait for the server to be alive"""
232s         url = cls.base_url() + 'api/contents'
232s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
232s             try:
232s                 cls.fetch_url(url)
232s             except ModuleNotFoundError as error:
232s                 # Errors that should be immediately thrown back to caller
232s                 raise error
232s             except Exception as e:
232s                 if not cls.notebook_thread.is_alive():
232s >                   raise RuntimeError("The notebook server failed to start") from e
232s E                   RuntimeError: The notebook server failed to start
232s
232s notebook/tests/launchnotebook.py:59: RuntimeError
232s __________________ ERROR at setup of APITest.test_get_status ___________________
232s
232s self =
232s
232s     def _new_conn(self) -> socket.socket:
232s         """Establish a socket connection and set nodelay settings on it.
232s
232s         :return: New socket connection.
232s         """
232s         try:
232s >           sock = connection.create_connection(
232s                 (self._dns_host, self.port),
232s                 self.timeout,
232s                 source_address=self.source_address,
232s                 socket_options=self.socket_options,
232s             )
232s
232s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
232s     raise err
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s address = ('localhost', 12341), timeout = None, source_address = None
232s socket_options = [(6, 1, 1)]
232s
232s def create_connection(
232s     address: tuple[str, int],
232s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
232s     source_address: tuple[str, int] | None = None,
232s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
232s ) -> socket.socket:
232s     """Connect to *address* and return the socket object.
232s
232s     Convenience function. Connect to *address* (a 2-tuple ``(host,
232s     port)``) and return the socket object. Passing the optional
232s     *timeout* parameter will set the timeout on the socket instance
232s     before attempting to connect. If no *timeout* is supplied, the
232s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
232s     is used. If *source_address* is set it must be a tuple of (host, port)
232s     for the socket to bind as a source address before making the connection.
232s     An host of '' or port 0 tells the OS to use the default.
232s     """
232s
232s     host, port = address
232s     if host.startswith("["):
232s         host = host.strip("[]")
232s     err = None
232s
232s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
232s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
232s     # The original create_connection function always returns all records.
232s     family = allowed_gai_family()
232s
232s     try:
232s         host.encode("idna")
232s     except UnicodeError:
232s         raise LocationParseError(f"'{host}', label empty or too long") from None
232s
232s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
232s         af, socktype, proto, canonname, sa = res
232s         sock = None
232s         try:
232s             sock = socket.socket(af, socktype, proto)
232s
232s             # If provided, set socket level options before connecting.
232s             _set_socket_options(sock, socket_options)
232s
232s             if timeout is not _DEFAULT_TIMEOUT:
232s                 sock.settimeout(timeout)
232s             if source_address:
232s                 sock.bind(source_address)
232s >           sock.connect(sa)
232s E           ConnectionRefusedError: [Errno 111] Connection refused
232s
232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
232s
232s The above exception was the direct cause of the following exception:
232s
232s self =
232s method = 'GET', url = '/a%40b/api/contents', body = None
232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
232s redirect = False, assert_same_host = False
232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
232s release_conn = False, chunked = False, body_pos = None, preload_content = False
232s decode_content = False, response_kw = {}
232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
232s destination_scheme = None, conn = None, release_this_conn = True
232s http_tunnel_required = False, err = None, clean_exit = False
232s
232s     def urlopen(  # type: ignore[override]
232s         self,
232s         method: str,
232s         url: str,
232s         body: _TYPE_BODY | None = None,
232s         headers: typing.Mapping[str, str] | None = None,
232s         retries: Retry | bool | int | None = None,
232s         redirect: bool = True,
232s         assert_same_host: bool = True,
232s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
232s         pool_timeout: int | None = None,
232s         release_conn: bool | None = None,
232s         chunked: bool = False,
232s         body_pos: _TYPE_BODY_POSITION | None = None,
232s         preload_content: bool = True,
232s         decode_content: bool = True,
232s         **response_kw: typing.Any,
232s     ) -> BaseHTTPResponse:
232s         """
232s         Get a connection from the pool and perform an HTTP request. This is the
232s         lowest level call for making a request, so you'll need to specify all
232s         the raw details.
232s
232s         .. note::
232s
232s            More commonly, it's appropriate to use a convenience method
232s            such as :meth:`request`.
232s
232s         .. note::
232s
232s            `release_conn` will only behave as expected if
232s            `preload_content=False` because we want to make
232s            `preload_content=False` the default behaviour someday soon without
232s            breaking backwards compatibility.
232s
232s         :param method:
232s             HTTP request method (such as GET, POST, PUT, etc.)
232s
232s         :param url:
232s             The URL to perform the request on.
232s
232s         :param body:
232s             Data to send in the request body, either :class:`str`, :class:`bytes`,
232s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
232s
232s         :param headers:
232s             Dictionary of custom headers to send, such as User-Agent,
232s             If-None-Match, etc. If None, pool headers are used. If provided,
232s             these headers completely replace any pool-specific headers.
232s
232s         :param retries:
232s             Configure the number of retries to allow before raising a
232s             :class:`~urllib3.exceptions.MaxRetryError` exception.
232s
232s             Pass ``None`` to retry until you receive a response. Pass a
232s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
232s             over different types of retries.
232s             Pass an integer number to retry connection errors that many times,
232s             but no other types of errors. Pass zero to never retry.
232s
232s             If ``False``, then retries are disabled and any exception is raised
232s             immediately. Also, instead of raising a MaxRetryError on redirects,
232s             the redirect response will be returned.
232s
232s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
232s
232s         :param redirect:
232s             If True, automatically handle redirects (status codes 301, 302,
232s             303, 307, 308). Each redirect counts as a retry. Disabling retries
232s             will disable redirect, too.
232s
232s         :param assert_same_host:
232s             If ``True``, will make sure that the host of the pool requests is
232s             consistent else will raise HostChangedError. When ``False``, you can
232s             use the pool on an HTTP proxy and request foreign hosts.
232s
232s         :param timeout:
232s             If specified, overrides the default timeout for this one
232s             request. It may be a float (in seconds) or an instance of
232s             :class:`urllib3.util.Timeout`.
232s
232s         :param pool_timeout:
232s             If set and the pool is set to block=True, then this method will
232s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
232s             connection is available within the time period.
232s
232s         :param bool preload_content:
232s             If True, the response's body will be preloaded into memory.
232s
232s         :param bool decode_content:
232s             If True, will attempt to decode the body based on the
232s             'content-encoding' header.
232s
232s         :param release_conn:
232s             If False, then the urlopen call will not release the connection
232s             back into the pool once a response is received (but will release if
232s             you read the entire contents of the response such as when
232s             `preload_content=True`). This is useful if you're not preloading
232s             the response's content immediately. You will need to call
232s             ``r.release_conn()`` on the response ``r`` to return the connection
232s             back into the pool. If None, it takes the value of ``preload_content``
232s             which defaults to ``True``.
232s
232s         :param bool chunked:
232s             If True, urllib3 will send the body using chunked transfer
232s             encoding. Otherwise, urllib3 will send the body using the standard
232s             content-length form. Defaults to False.
232s
232s         :param int body_pos:
232s             Position to seek to in file-like body in the event of a retry or
232s             redirect. Typically this won't need to be set because urllib3 will
232s             auto-populate the value when needed.
232s         """
232s         parsed_url = parse_url(url)
232s         destination_scheme = parsed_url.scheme
232s
232s         if headers is None:
232s             headers = self.headers
232s
232s         if not isinstance(retries, Retry):
232s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
232s
232s         if release_conn is None:
232s             release_conn = preload_content
232s
232s         # Check host
232s         if assert_same_host and not self.is_same_host(url):
232s             raise HostChangedError(self, url, retries)
232s
232s         # Ensure that the URL we're connecting to is properly encoded
232s         if url.startswith("/"):
232s             url = to_str(_encode_target(url))
232s         else:
232s             url = to_str(parsed_url.url)
232s
232s         conn = None
232s
232s         # Track whether `conn` needs to be released before
232s         # returning/raising/recursing. Update this variable if necessary, and
232s         # leave `release_conn` constant throughout the function. That way, if
232s         # the function recurses, the original value of `release_conn` will be
232s         # passed down into the recursive call, and its value will be respected.
232s         #
232s         # See issue #651 [1] for details.
232s         #
232s         # [1]
232s         release_this_conn = release_conn
232s
232s         http_tunnel_required = connection_requires_http_tunnel(
232s             self.proxy, self.proxy_config, destination_scheme
232s         )
232s
232s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
232s         # have to copy the headers dict so we can safely change it without those
232s         # changes being reflected in anyone else's copy.
232s         if not http_tunnel_required:
232s             headers = headers.copy()  # type: ignore[attr-defined]
232s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
232s
232s         # Must keep the exception bound to a separate variable or else Python 3
232s         # complains about UnboundLocalError.
232s         err = None
232s
232s         # Keep track of whether we cleanly exited the except block. This
232s         # ensures we do proper cleanup in finally.
232s         clean_exit = False
232s
232s         # Rewind body position, if needed. Record current position
232s         # for future rewinds in the event of a redirect/retry.
232s         body_pos = set_file_position(body, body_pos)
232s
232s         try:
232s             # Request a connection from the queue.
232s             timeout_obj = self._get_timeout(timeout)
232s             conn = self._get_conn(timeout=pool_timeout)
232s
232s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
232s
232s             # Is this a closed/new connection that requires CONNECT tunnelling?
232s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
232s                 try:
232s                     self._prepare_proxy(conn)
232s                 except (BaseSSLError, OSError, SocketTimeout) as e:
232s                     self._raise_timeout(
232s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
232s                     )
232s                     raise
232s
232s             # If we're going to release the connection in ``finally:``, then
232s             # the response doesn't need to know about the connection. Otherwise
232s             # it will also try to release it and we'll have a double-release
232s             # mess.
232s             response_conn = conn if not release_conn else None
232s
232s             # Make the request on the HTTPConnection object
232s >           response = self._make_request(
232s                 conn,
232s                 method,
232s                 url,
232s                 timeout=timeout_obj,
232s                 body=body,
232s                 headers=headers,
232s                 chunked=chunked,
232s                 retries=retries,
232s                 response_conn=response_conn,
232s                 preload_content=preload_content,
232s                 decode_content=decode_content,
232s                 **response_kw,
232s             )
232s
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
232s     conn.request(
232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
232s     self.endheaders()
232s /usr/lib/python3.12/http/client.py:1331: in endheaders
232s     self._send_output(message_body, encode_chunked=encode_chunked)
232s /usr/lib/python3.12/http/client.py:1091: in _send_output
232s     self.send(msg)
232s /usr/lib/python3.12/http/client.py:1035: in send
232s     self.connect()
232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
232s     self.sock = self._new_conn()
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s self =
232s
232s     def _new_conn(self) -> socket.socket:
232s         """Establish a socket connection and set nodelay settings on it.
232s
232s         :return: New socket connection.
232s         """
232s         try:
232s             sock = connection.create_connection(
232s                 (self._dns_host, self.port),
232s                 self.timeout,
232s                 source_address=self.source_address,
232s                 socket_options=self.socket_options,
232s             )
232s         except socket.gaierror as e:
232s             raise NameResolutionError(self.host, self, e) from e
232s         except SocketTimeout as e:
232s             raise ConnectTimeoutError(
232s                 self,
232s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
232s             ) from e
232s
232s         except OSError as e:
232s >           raise NewConnectionError(
232s                 self, f"Failed to establish a new connection: {e}"
232s             ) from e
232s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
232s
232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
232s
232s The above exception was the direct cause of the following exception:
232s
232s self =
232s request = , stream = False
232s timeout = Timeout(connect=None, read=None, total=None), verify = True
232s cert = None, proxies = OrderedDict()
232s
232s     def send(
232s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
232s     ):
232s         """Sends PreparedRequest object. Returns Response object.
232s
232s         :param request: The :class:`PreparedRequest ` being sent.
232s         :param stream: (optional) Whether to stream the request content.
232s         :param timeout: (optional) How long to wait for the server to send
232s             data before giving up, as a float, or a :ref:`(connect timeout,
232s             read timeout) ` tuple.
232s         :type timeout: float or tuple or urllib3 Timeout object
232s         :param verify: (optional) Either a boolean, in which case it controls whether
232s             we verify the server's TLS certificate, or a string, in which case it
232s             must be a path to a CA bundle to use
232s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
232s         :param proxies: (optional) The proxies dictionary to apply to the request.
232s         :rtype: requests.Response
232s         """
232s
232s         try:
232s             conn = self.get_connection(request.url, proxies)
232s         except LocationValueError as e:
232s             raise InvalidURL(e, request=request)
232s
232s         self.cert_verify(conn, request.url, verify, cert)
232s         url = self.request_url(request, proxies)
232s         self.add_headers(
232s             request,
232s             stream=stream,
232s             timeout=timeout,
232s             verify=verify,
232s             cert=cert,
232s             proxies=proxies,
232s         )
232s
232s         chunked = not (request.body is None or "Content-Length" in request.headers)
232s
232s         if isinstance(timeout, tuple):
232s             try:
232s                 connect, read = timeout
232s                 timeout = TimeoutSauce(connect=connect, read=read)
232s             except ValueError:
232s                 raise ValueError(
232s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
232s                     f"or a single float to set both timeouts to the same value."
232s                 )
232s         elif isinstance(timeout, TimeoutSauce):
232s             pass
232s         else:
232s             timeout = TimeoutSauce(connect=timeout, read=timeout)
232s
232s         try:
232s >           resp = conn.urlopen(
232s                 method=request.method,
232s                 url=url,
232s                 body=request.body,
232s                 headers=request.headers,
232s                 redirect=False,
232s                 assert_same_host=False,
232s                 preload_content=False,
232s                 decode_content=False,
232s                 retries=self.max_retries,
232s                 timeout=timeout,
232s                 chunked=chunked,
232s             )
232s
232s /usr/lib/python3/dist-packages/requests/adapters.py:486:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
232s     retries = retries.increment(
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
232s method = 'GET', url = '/a%40b/api/contents', response = None
232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
232s _pool =
232s _stacktrace =
232s
232s     def increment(
232s         self,
232s         method: str | None = None,
232s         url: str | None = None,
232s         response: BaseHTTPResponse | None = None,
232s         error: Exception | None = None,
232s         _pool: ConnectionPool | None = None,
232s         _stacktrace: TracebackType | None = None,
232s     ) -> Retry:
232s         """Return a new Retry object with incremented retry counters.
232s
232s         :param response: A response object, or None, if the server did not
232s             return a response.
232s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
232s         :param Exception error: An error encountered during the request, or
232s             None if the response was received successfully.
232s
232s         :return: A new ``Retry`` object.
232s         """
232s         if self.total is False and error:
232s             # Disabled, indicate to re-raise the error.
232s             raise reraise(type(error), error, _stacktrace)
232s
232s         total = self.total
232s         if total is not None:
232s             total -= 1
232s
232s         connect = self.connect
232s         read = self.read
232s         redirect = self.redirect
232s         status_count = self.status
232s         other = self.other
232s         cause = "unknown"
232s         status = None
232s         redirect_location = None
232s
232s         if error and self._is_connection_error(error):
232s             # Connect retry?
232s             if connect is False:
232s                 raise reraise(type(error), error, _stacktrace)
232s             elif connect is not None:
232s                 connect -= 1
232s
232s         elif error and self._is_read_error(error):
232s             # Read retry?
232s             if read is False or method is None or not self._is_method_retryable(method):
232s                 raise reraise(type(error), error, _stacktrace)
232s             elif read is not None:
232s                 read -= 1
232s
232s         elif error:
232s             # Other retry?
232s             if other is not None:
232s                 other -= 1
232s
232s         elif response and response.get_redirect_location():
232s             # Redirect retry?
232s             if redirect is not None:
232s                 redirect -= 1
232s             cause = "too many redirects"
232s             response_redirect_location = response.get_redirect_location()
232s             if response_redirect_location:
232s                 redirect_location = response_redirect_location
232s             status = response.status
232s
232s         else:
232s             # Incrementing because of a server error like a 500 in
232s             # status_forcelist and the given method is in the allowed_methods
232s             cause = ResponseError.GENERIC_ERROR
232s             if response and response.status:
232s                 if status_count is not None:
232s                     status_count -= 1
232s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
232s                 status = response.status
232s
232s         history = self.history + (
232s             RequestHistory(method, url, error, status, redirect_location),
232s         )
232s
232s         new_retry = self.new(
232s             total=total,
232s             connect=connect,
232s             read=read,
232s             redirect=redirect,
232s             status=status_count,
232s             other=other,
232s             history=history,
232s         )
232s
232s         if new_retry.is_exhausted():
232s             reason = error or ResponseError(cause)
232s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
232s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
232s
232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
232s
232s During handling of the above exception, another exception occurred:
232s
232s cls =
232s
232s     @classmethod
232s     def wait_until_alive(cls):
232s         """Wait for the server to be alive"""
232s         url = cls.base_url() + 'api/contents'
232s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
232s             try:
232s >               cls.fetch_url(url)
232s
232s notebook/tests/launchnotebook.py:53:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s notebook/tests/launchnotebook.py:82: in fetch_url
232s     return requests.get(url)
232s /usr/lib/python3/dist-packages/requests/api.py:73: in get
232s     return request("get", url, params=params, **kwargs)
232s /usr/lib/python3/dist-packages/requests/api.py:59: in request
232s     return session.request(method=method, url=url, **kwargs)
232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
232s     resp = self.send(prep, **send_kwargs)
232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
232s     r = adapter.send(request, **kwargs)
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s self =
232s request = , stream = False
232s timeout = Timeout(connect=None, read=None, total=None), verify = True
232s cert = None, proxies = OrderedDict()
232s
232s     def send(
232s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
232s     ):
232s         """Sends PreparedRequest object. Returns Response object.
232s
232s         :param request: The :class:`PreparedRequest ` being sent.
232s         :param stream: (optional) Whether to stream the request content.
232s         :param timeout: (optional) How long to wait for the server to send
232s             data before giving up, as a float, or a :ref:`(connect timeout,
232s             read timeout) ` tuple.
232s         :type timeout: float or tuple or urllib3 Timeout object
232s         :param verify: (optional) Either a boolean, in which case it controls whether
232s             we verify the server's TLS certificate, or a string, in which case it
232s             must be a path to a CA bundle to use
232s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
232s         :param proxies: (optional) The proxies dictionary to apply to the request.
232s         :rtype: requests.Response
232s         """
232s
232s         try:
232s             conn = self.get_connection(request.url, proxies)
232s         except LocationValueError as e:
232s             raise InvalidURL(e, request=request)
232s
232s         self.cert_verify(conn, request.url, verify, cert)
232s         url = self.request_url(request, proxies)
232s         self.add_headers(
232s             request,
232s             stream=stream,
232s             timeout=timeout,
232s             verify=verify,
232s             cert=cert,
232s             proxies=proxies,
232s         )
232s
232s         chunked = not (request.body is None or "Content-Length" in request.headers)
232s
232s         if isinstance(timeout, tuple):
232s             try:
232s                 connect, read = timeout
232s                 timeout = TimeoutSauce(connect=connect, read=read)
232s             except ValueError:
232s                 raise ValueError(
232s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
232s                     f"or a single float to set both timeouts to the same value."
232s                 )
232s         elif isinstance(timeout, TimeoutSauce):
232s             pass
232s         else:
232s             timeout = TimeoutSauce(connect=timeout, read=timeout)
232s
232s         try:
232s             resp = conn.urlopen(
232s                 method=request.method,
232s                 url=url,
232s                 body=request.body,
232s                 headers=request.headers,
232s                 redirect=False,
232s                 assert_same_host=False,
232s                 preload_content=False,
232s                 decode_content=False,
232s                 retries=self.max_retries,
232s                 timeout=timeout,
232s                 chunked=chunked,
232s             )
232s
232s         except (ProtocolError, OSError) as err:
232s             raise ConnectionError(err, request=request)
232s
232s         except MaxRetryError as e:
232s             if isinstance(e.reason, ConnectTimeoutError):
232s                 # TODO: Remove this in 3.0.0: see #2811
232s                 if not isinstance(e.reason, NewConnectionError):
232s                     raise ConnectTimeout(e, request=request)
232s
232s             if isinstance(e.reason, ResponseError):
232s                 raise RetryError(e, request=request)
232s
232s             if isinstance(e.reason, _ProxyError):
232s                 raise ProxyError(e, request=request)
232s
232s             if isinstance(e.reason, _SSLError):
232s                 # This branch is for urllib3 v1.22 and later.
232s                 raise SSLError(e, request=request)
232s
232s >           raise ConnectionError(e, request=request)
232s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
232s
232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
232s
232s The above exception was the direct cause of the following exception:
232s
232s cls =
232s
232s     @classmethod
232s     def setup_class(cls):
232s         cls.tmp_dir = TemporaryDirectory()
232s         def tmp(*parts):
232s             path = os.path.join(cls.tmp_dir.name, *parts)
232s             try:
232s                 os.makedirs(path)
232s             except OSError as e:
232s                 if e.errno != errno.EEXIST:
232s                     raise
232s             return path
232s
232s         cls.home_dir = tmp('home')
232s         data_dir = cls.data_dir = tmp('data')
232s         config_dir = cls.config_dir = tmp('config')
232s         runtime_dir = cls.runtime_dir = tmp('runtime')
232s         cls.notebook_dir = tmp('notebooks')
232s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
232s         cls.env_patch.start()
232s         # Patch systemwide & user-wide data & config directories, to isolate
232s         # the tests from oddities of the local setup. But leave Python env
232s         # locations alone, so data files for e.g. nbconvert are accessible.
232s         # If this isolation isn't sufficient, you may need to run the tests in
232s         # a virtualenv or conda env.
232s         cls.path_patch = patch.multiple(
232s             jupyter_core.paths,
232s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
232s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
232s         )
232s         cls.path_patch.start()
232s
232s         config = cls.config or Config()
232s         config.NotebookNotary.db_file = ':memory:'
232s
232s         cls.token = hexlify(os.urandom(4)).decode('ascii')
232s
232s         started = Event()
232s         def start_thread():
232s             try:
232s                 bind_args = cls.get_bind_args()
232s                 app = cls.notebook = NotebookApp(
232s                     port_retries=0,
232s                     open_browser=False,
232s                     config_dir=cls.config_dir,
232s                     data_dir=cls.data_dir,
232s                     runtime_dir=cls.runtime_dir,
232s                     notebook_dir=cls.notebook_dir,
232s                     base_url=cls.url_prefix,
232s                     config=config,
232s                     allow_root=True,
232s                     token=cls.token,
232s                     **bind_args
232s                 )
232s                 if "asyncio" in sys.modules:
232s                     app._init_asyncio_patch()
232s                     import asyncio
232s
232s                     asyncio.set_event_loop(asyncio.new_event_loop())
232s                     # Patch the current loop in order to match production
232s                     # behavior
232s                     import nest_asyncio
232s
232s                     nest_asyncio.apply()
232s                 # don't register signal handler during tests
232s                 app.init_signal = lambda : None
232s                 # clear log handlers and propagate to root for nose to capture it
232s                 # needs to be redone after initialize, which reconfigures logging
232s                 app.log.propagate = True
232s                 app.log.handlers = []
232s                 app.initialize(argv=cls.get_argv())
232s                 app.log.propagate = True
232s                 app.log.handlers = []
232s                 loop = IOLoop.current()
232s                 loop.add_callback(started.set)
232s                 app.start()
232s             finally:
232s                 # set the event, so failure to start doesn't cause a hang
232s                 started.set()
232s                 app.session_manager.close()
232s         cls.notebook_thread = Thread(target=start_thread)
232s         cls.notebook_thread.daemon = True
232s         cls.notebook_thread.start()
232s         started.wait()
232s >       cls.wait_until_alive()
232s
232s notebook/tests/launchnotebook.py:198:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s cls =
232s
232s     @classmethod
232s     def wait_until_alive(cls):
232s         """Wait for the server to be alive"""
232s         url = cls.base_url() + 'api/contents'
232s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
232s             try:
232s                 cls.fetch_url(url)
232s             except ModuleNotFoundError as error:
232s                 # Errors that should be immediately thrown back to caller
232s                 raise error
232s             except Exception as e:
232s                 if not cls.notebook_thread.is_alive():
232s >                   raise RuntimeError("The notebook server failed to start") from e
232s E                   RuntimeError: The notebook server failed to start
232s
232s notebook/tests/launchnotebook.py:59: RuntimeError
232s _______________ ERROR at setup of APITest.test_no_track_activity _______________
232s
232s self =
232s
232s     def _new_conn(self) -> socket.socket:
232s         """Establish a socket connection and set nodelay settings on it.
232s
232s         :return: New socket connection.
232s         """
232s         try:
232s >           sock = connection.create_connection(
232s                 (self._dns_host, self.port),
232s                 self.timeout,
232s                 source_address=self.source_address,
232s                 socket_options=self.socket_options,
232s             )
232s
232s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
232s     raise err
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s address = ('localhost', 12341), timeout = None, source_address = None
232s socket_options = [(6, 1, 1)]
232s
232s def create_connection(
232s     address: tuple[str, int],
232s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
232s     source_address: tuple[str, int] | None = None,
232s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
232s ) -> socket.socket:
232s     """Connect to *address* and return the socket object.
232s
232s     Convenience function. Connect to *address* (a 2-tuple ``(host,
232s     port)``) and return the socket object. Passing the optional
232s     *timeout* parameter will set the timeout on the socket instance
232s     before attempting to connect. If no *timeout* is supplied, the
232s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
232s     is used. If *source_address* is set it must be a tuple of (host, port)
232s     for the socket to bind as a source address before making the connection.
232s     An host of '' or port 0 tells the OS to use the default.
232s     """
232s
232s     host, port = address
232s     if host.startswith("["):
232s         host = host.strip("[]")
232s     err = None
232s
232s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
232s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
232s     # The original create_connection function always returns all records.
232s     family = allowed_gai_family()
232s
232s     try:
232s         host.encode("idna")
232s     except UnicodeError:
232s         raise LocationParseError(f"'{host}', label empty or too long") from None
232s
232s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
232s         af, socktype, proto, canonname, sa = res
232s         sock = None
232s         try:
232s             sock = socket.socket(af, socktype, proto)
232s
232s             # If provided, set socket level options before connecting.
232s             _set_socket_options(sock, socket_options)
232s
232s             if timeout is not _DEFAULT_TIMEOUT:
232s                 sock.settimeout(timeout)
232s             if source_address:
232s                 sock.bind(source_address)
232s >           sock.connect(sa)
232s E           ConnectionRefusedError: [Errno 111] Connection refused
232s
232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
232s
232s The above exception was the direct cause of the following exception:
232s
232s self =
232s method = 'GET', url = '/a%40b/api/contents', body = None
232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
232s redirect = False, assert_same_host = False
232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
232s release_conn = False, chunked = False, body_pos = None, preload_content = False
232s decode_content = False, response_kw = {}
232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
232s destination_scheme = None, conn = None, release_this_conn = True
232s http_tunnel_required = False, err = None, clean_exit = False
232s
232s     def urlopen(  # type: ignore[override]
232s         self,
232s         method: str,
232s         url: str,
232s         body: _TYPE_BODY | None = None,
232s         headers: typing.Mapping[str, str] | None = None,
232s         retries: Retry | bool | int | None = None,
232s         redirect: bool = True,
232s         assert_same_host: bool = True,
232s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
232s         pool_timeout: int | None = None,
232s         release_conn: bool | None = None,
232s         chunked: bool = False,
232s         body_pos: _TYPE_BODY_POSITION | None = None,
232s         preload_content: bool = True,
232s         decode_content: bool = True,
232s         **response_kw: typing.Any,
232s     ) -> BaseHTTPResponse:
232s         """
232s         Get a connection from the pool and perform an HTTP request. This is the
232s         lowest level call for making a request, so you'll need to specify all
232s         the raw details.
232s
232s         .. note::
232s
232s            More commonly, it's appropriate to use a convenience method
232s            such as :meth:`request`.
232s
232s         .. note::
232s
232s            `release_conn` will only behave as expected if
232s            `preload_content=False` because we want to make
232s            `preload_content=False` the default behaviour someday soon without
232s            breaking backwards compatibility.
232s
232s         :param method:
232s             HTTP request method (such as GET, POST, PUT, etc.)
232s
232s         :param url:
232s             The URL to perform the request on.
232s
232s         :param body:
232s             Data to send in the request body, either :class:`str`, :class:`bytes`,
232s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
232s
232s         :param headers:
232s             Dictionary of custom headers to send, such as User-Agent,
232s             If-None-Match, etc. If None, pool headers are used. If provided,
232s             these headers completely replace any pool-specific headers.
232s
232s         :param retries:
232s             Configure the number of retries to allow before raising a
232s             :class:`~urllib3.exceptions.MaxRetryError` exception.
232s
232s             Pass ``None`` to retry until you receive a response. Pass a
232s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
232s             over different types of retries.
232s             Pass an integer number to retry connection errors that many times,
232s             but no other types of errors. Pass zero to never retry.
232s
232s             If ``False``, then retries are disabled and any exception is raised
232s             immediately. Also, instead of raising a MaxRetryError on redirects,
232s             the redirect response will be returned.
232s
232s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
232s
232s         :param redirect:
232s             If True, automatically handle redirects (status codes 301, 302,
232s             303, 307, 308). Each redirect counts as a retry. Disabling retries
232s             will disable redirect, too.
232s
232s         :param assert_same_host:
232s             If ``True``, will make sure that the host of the pool requests is
232s             consistent else will raise HostChangedError. When ``False``, you can
232s             use the pool on an HTTP proxy and request foreign hosts.
232s
232s         :param timeout:
232s             If specified, overrides the default timeout for this one
232s             request. It may be a float (in seconds) or an instance of
232s             :class:`urllib3.util.Timeout`.
232s
232s         :param pool_timeout:
232s             If set and the pool is set to block=True, then this method will
232s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
232s             connection is available within the time period.
232s
232s         :param bool preload_content:
232s             If True, the response's body will be preloaded into memory.
232s
232s         :param bool decode_content:
232s             If True, will attempt to decode the body based on the
232s             'content-encoding' header.
232s
232s         :param release_conn:
232s             If False, then the urlopen call will not release the connection
232s             back into the pool once a response is received (but will release if
232s             you read the entire contents of the response such as when
232s             `preload_content=True`). This is useful if you're not preloading
232s             the response's content immediately. You will need to call
232s             ``r.release_conn()`` on the response ``r`` to return the connection
232s             back into the pool. If None, it takes the value of ``preload_content``
232s             which defaults to ``True``.
232s
232s         :param bool chunked:
232s             If True, urllib3 will send the body using chunked transfer
232s             encoding. Otherwise, urllib3 will send the body using the standard
232s             content-length form. Defaults to False.
232s
232s         :param int body_pos:
232s             Position to seek to in file-like body in the event of a retry or
232s             redirect. Typically this won't need to be set because urllib3 will
232s             auto-populate the value when needed.
232s         """
232s         parsed_url = parse_url(url)
232s         destination_scheme = parsed_url.scheme
232s
232s         if headers is None:
232s             headers = self.headers
232s
232s         if not isinstance(retries, Retry):
232s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
232s
232s         if release_conn is None:
232s             release_conn = preload_content
232s
232s         # Check host
232s         if assert_same_host and not self.is_same_host(url):
232s             raise HostChangedError(self, url, retries)
232s
232s         # Ensure that the URL we're connecting to is properly encoded
232s         if url.startswith("/"):
232s             url = to_str(_encode_target(url))
232s         else:
232s             url = to_str(parsed_url.url)
232s
232s         conn = None
232s
232s         # Track whether `conn` needs to be released before
232s         # returning/raising/recursing. Update this variable if necessary, and
232s         # leave `release_conn` constant throughout the function. That way, if
232s         # the function recurses, the original value of `release_conn` will be
232s         # passed down into the recursive call, and its value will be respected.
232s         #
232s         # See issue #651 [1] for details.
232s         #
232s         # [1]
232s         release_this_conn = release_conn
232s
232s         http_tunnel_required = connection_requires_http_tunnel(
232s             self.proxy, self.proxy_config, destination_scheme
232s         )
232s
232s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
232s         # have to copy the headers dict so we can safely change it without those
232s         # changes being reflected in anyone else's copy.
232s         if not http_tunnel_required:
232s             headers = headers.copy()  # type: ignore[attr-defined]
232s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
232s
232s         # Must keep the exception bound to a separate variable or else Python 3
232s         # complains about UnboundLocalError.
232s         err = None
232s
232s         # Keep track of whether we cleanly exited the except block. This
232s         # ensures we do proper cleanup in finally.
232s         clean_exit = False
232s
232s         # Rewind body position, if needed. Record current position
232s         # for future rewinds in the event of a redirect/retry.
232s         body_pos = set_file_position(body, body_pos)
232s
232s         try:
232s             # Request a connection from the queue.
232s             timeout_obj = self._get_timeout(timeout)
232s             conn = self._get_conn(timeout=pool_timeout)
232s
232s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
232s
232s             # Is this a closed/new connection that requires CONNECT tunnelling?
232s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
232s                 try:
232s                     self._prepare_proxy(conn)
232s                 except (BaseSSLError, OSError, SocketTimeout) as e:
232s                     self._raise_timeout(
232s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
232s                     )
232s                     raise
232s
232s             # If we're going to release the connection in ``finally:``, then
232s             # the response doesn't need to know about the connection. Otherwise
232s             # it will also try to release it and we'll have a double-release
232s             # mess.
232s             response_conn = conn if not release_conn else None
232s
232s             # Make the request on the HTTPConnection object
232s >           response = self._make_request(
232s                 conn,
232s                 method,
232s                 url,
232s                 timeout=timeout_obj,
232s                 body=body,
232s                 headers=headers,
232s                 chunked=chunked,
232s                 retries=retries,
232s                 response_conn=response_conn,
232s                 preload_content=preload_content,
232s                 decode_content=decode_content,
232s                 **response_kw,
232s             )
232s
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
232s     conn.request(
232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
232s     self.endheaders()
232s /usr/lib/python3.12/http/client.py:1331: in endheaders
232s     self._send_output(message_body, encode_chunked=encode_chunked)
232s /usr/lib/python3.12/http/client.py:1091: in _send_output
232s     self.send(msg)
232s /usr/lib/python3.12/http/client.py:1035: in
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
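The tuple handling in the adapter code above is just an unpacking into urllib3's `Timeout` class, which requests imports under the alias `TimeoutSauce`. An illustrative sketch against urllib3 directly:

```python
from urllib3.util.timeout import Timeout  # requests aliases this as TimeoutSauce

# A (connect, read) tuple becomes separate connect/read deadlines...
connect, read = (3.05, 27)
t = Timeout(connect=connect, read=read)
assert t.connect_timeout == 3.05
assert t.read_timeout == 27

# ...while a single value sets both, matching the adapter's else branch.
t2 = Timeout(connect=10, read=10)
assert t2.connect_timeout == t2.read_timeout == 10
```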
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
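Each failed attempt flows through `Retry.increment` as quoted above: it returns a *new* `Retry` with decremented counters and the attempt appended to `history`, rather than mutating in place, and raises `MaxRetryError` once the counters are exhausted. A standalone sketch (the `OSError` here is an arbitrary stand-in for a real connection failure):

```python
from urllib3.util.retry import Retry

r = Retry(total=2)
r2 = r.increment(method="GET", url="/api/contents", error=OSError("refused"))

assert r2.total == 1            # the counter is decremented on the copy...
assert r.total == 2             # ...while the original Retry is untouched
assert len(r2.history) == 1     # the failed attempt is recorded
assert not r2.is_exhausted()

# With Retry(total=0), as in this log, the very first increment() call
# exhausts the counters and raises MaxRetryError instead of returning.
```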
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
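The `except MaxRetryError` branch above translates urllib3 failures into requests' public exception types. Their hierarchy (stable across requests 2.x) lets callers catch broadly or narrowly:

```python
from requests import exceptions as rex

# ConnectTimeout inherits from both ConnectionError and Timeout, so a
# handler for either will catch it; everything derives from RequestException.
assert issubclass(rex.ConnectTimeout, rex.ConnectionError)
assert issubclass(rex.ConnectTimeout, rex.Timeout)
assert issubclass(rex.ProxyError, rex.ConnectionError)
assert issubclass(rex.SSLError, rex.ConnectionError)
assert issubclass(rex.RetryError, rex.RequestException)
assert issubclass(rex.ConnectionError, rex.RequestException)
```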
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
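The isolation pattern in `setup_class` above (throwaway directories plus a patched environment, restored afterwards) can be sketched with only the standard library; the environment variable name below is illustrative, not the exact set the notebook suite patches:

```python
import os
import tempfile
from unittest.mock import patch

before = os.environ.get("JUPYTER_CONFIG_DIR")

with tempfile.TemporaryDirectory() as tmp:
    config_dir = os.path.join(tmp, "config")
    os.makedirs(config_dir)
    # patch.dict restores the original environment on exit, so tests
    # cannot leak state into each other or onto the developer's machine.
    with patch.dict(os.environ, {"JUPYTER_CONFIG_DIR": config_dir}):
        assert os.environ["JUPYTER_CONFIG_DIR"] == config_dir

# Both the directory and the environment change are gone afterwards.
assert os.environ.get("JUPYTER_CONFIG_DIR") == before
```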
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ____________ ERROR at setup of APITest.test_create_retrieve_config _____________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
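`wait_until_alive` above is an instance of a generic poll-until-ready loop, with `MAX_WAITTIME` and `POLL_INTERVAL` bounding the attempts. A self-contained version of the pattern, with a fake probe standing in for `fetch_url` succeeding:

```python
import time

def wait_until(predicate, max_wait=5.0, poll_interval=0.05):
    """Poll *predicate* until it returns truthy or *max_wait* elapses."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval)
    return False

attempts = {"n": 0}

def server_alive():  # stand-in for fetch_url(); succeeds on the 3rd poll
    attempts["n"] += 1
    return attempts["n"] >= 3

assert wait_until(server_alive)
assert attempts["n"] == 3
```

When the probe never succeeds and the serving thread has died, the test class raises the `RuntimeError("The notebook server failed to start")` seen in this log.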
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
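The `socket_options` value visible earlier in this traceback, `[(6, 1, 1)]`, is urllib3's default of `(IPPROTO_TCP, TCP_NODELAY, 1)`, applied by `_set_socket_options` below before connecting. Shown here with the stdlib constants (values as on a Linux testbed like this one):

```python
import socket

# (6, 1, 1) == (IPPROTO_TCP, TCP_NODELAY, 1): disable Nagle's algorithm,
# so small HTTP requests are sent immediately instead of being coalesced.
opt = (socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
assert opt[0] == 6  # IPPROTO_TCP is 6 on every platform (IANA protocol number)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(*opt)
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```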
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s __________________ ERROR at setup of APITest.test_get_unknown __________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ____________________ ERROR at setup of APITest.test_modify _____________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s __________________ ERROR at setup of APITest.test_checkpoints __________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ___________ ERROR at setup of APITest.test_checkpoints_separate_root ___________
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s _____________________ ERROR at setup of APITest.test_copy ______________________
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
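The timeout normalization branching quoted from `HTTPAdapter.send` above (a ``(connect, read)`` tuple versus a single float applied to both) can be modeled in isolation. `TimeoutSauce` is requests' internal alias for `urllib3.util.Timeout`; this sketch returns a plain tuple instead, purely for illustration:

```python
def normalize_timeout(timeout):
    """Mirror the branching in HTTPAdapter.send: return (connect, read).

    Illustrative stand-in for requests' TimeoutSauce construction.
    """
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            # Same error requests raises for a malformed timeout tuple.
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return (connect, read)
    # A single value (including None) sets both timeouts.
    return (timeout, timeout)
```

In this log `timeout=None` was passed, so both connect and read timeouts were unlimited and the failure came from the refused connection, not a timeout.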
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ________________ ERROR at setup of APITest.test_copy_400_hidden ________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
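The `setup_class` shown above uses a start-thread-then-wait pattern: the server runs in a daemon thread, an `Event` is set once startup progresses, and it is also set in `finally` so a failed start cannot hang the caller. A minimal sketch of that pattern (names here are illustrative, not the notebook test harness API):

```python
import threading

def launch(server_main, timeout=5.0):
    """Start server_main in a daemon thread; wait until it signals startup.

    Sketch of the Event-synchronized startup in setup_class above.
    """
    started = threading.Event()

    def run():
        try:
            server_main()   # e.g. app.initialize() + app.start()
            started.set()   # signal readiness on the happy path
        finally:
            started.set()   # a failed start must not hang the caller

    t = threading.Thread(target=run, daemon=True)
    t.start()
    started.wait(timeout)
    return t
```

Because the `Event` is set even on failure, the caller still has to poll the server afterwards (as `wait_until_alive` does) and check `notebook_thread.is_alive()` to distinguish "starting slowly" from "crashed", which is exactly the check that produced the `RuntimeError` above.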
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ___________________ ERROR at setup of APITest.test_copy_copy ___________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s _________________ ERROR at setup of APITest.test_copy_dir_400 __________________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s )
232s elif isinstance(timeout, TimeoutSauce):
232s pass
232s else:
232s timeout = TimeoutSauce(connect=timeout, read=timeout)
232s
232s try:
232s resp = conn.urlopen(
232s method=request.method,
232s url=url,
232s body=request.body,
232s headers=request.headers,
232s redirect=False,
232s assert_same_host=False,
232s preload_content=False,
232s decode_content=False,
232s retries=self.max_retries,
232s timeout=timeout,
232s chunked=chunked,
232s )
232s
232s except (ProtocolError, OSError) as err:
232s raise ConnectionError(err, request=request)
232s
232s except MaxRetryError as e:
232s if isinstance(e.reason, ConnectTimeoutError):
232s # TODO: Remove this in 3.0.0: see #2811
232s if not isinstance(e.reason, NewConnectionError):
232s raise ConnectTimeout(e, request=request)
232s
232s if isinstance(e.reason, ResponseError):
232s raise RetryError(e, request=request)
232s
232s if isinstance(e.reason, _ProxyError):
232s raise ProxyError(e, request=request)
232s
232s if isinstance(e.reason, _SSLError):
232s # This branch is for urllib3 v1.22 and later.
232s raise SSLError(e, request=request)
232s
232s > raise ConnectionError(e, request=request)
232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
232s
232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
232s
232s The above exception was the direct cause of the following exception:
232s
232s cls =
232s
232s @classmethod
232s def setup_class(cls):
232s cls.tmp_dir = TemporaryDirectory()
232s def tmp(*parts):
232s path = os.path.join(cls.tmp_dir.name, *parts)
232s try:
232s os.makedirs(path)
232s except OSError as e:
232s if e.errno != errno.EEXIST:
232s raise
232s return path
232s
232s cls.home_dir = tmp('home')
232s data_dir = cls.data_dir = tmp('data')
232s config_dir = cls.config_dir = tmp('config')
232s runtime_dir = cls.runtime_dir = tmp('runtime')
232s cls.notebook_dir = tmp('notebooks')
232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
232s cls.env_patch.start()
232s # Patch systemwide & user-wide data & config directories, to isolate
232s # the tests from oddities of the local setup. But leave Python env
232s # locations alone, so data files for e.g. nbconvert are accessible.
232s # If this isolation isn't sufficient, you may need to run the tests in
232s # a virtualenv or conda env.
232s cls.path_patch = patch.multiple(
232s jupyter_core.paths,
232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
232s )
232s cls.path_patch.start()
232s
232s config = cls.config or Config()
232s config.NotebookNotary.db_file = ':memory:'
232s
232s cls.token = hexlify(os.urandom(4)).decode('ascii')
232s
232s started = Event()
232s def start_thread():
232s try:
232s bind_args = cls.get_bind_args()
232s app = cls.notebook = NotebookApp(
232s port_retries=0,
232s open_browser=False,
232s config_dir=cls.config_dir,
232s data_dir=cls.data_dir,
232s runtime_dir=cls.runtime_dir,
232s notebook_dir=cls.notebook_dir,
232s base_url=cls.url_prefix,
232s config=config,
232s allow_root=True,
232s token=cls.token,
232s **bind_args
232s )
232s if "asyncio" in sys.modules:
232s app._init_asyncio_patch()
232s import asyncio
232s
232s asyncio.set_event_loop(asyncio.new_event_loop())
232s # Patch the current loop in order to match production
232s # behavior
232s import nest_asyncio
232s
232s nest_asyncio.apply()
232s # don't register signal handler during tests
232s app.init_signal = lambda : None
232s # clear log handlers and propagate to root for nose to capture it
232s # needs to be redone after initialize, which reconfigures logging
232s app.log.propagate = True
232s app.log.handlers = []
232s app.initialize(argv=cls.get_argv())
232s app.log.propagate = True
232s app.log.handlers = []
232s loop = IOLoop.current()
232s loop.add_callback(started.set)
232s app.start()
232s finally:
232s # set the event, so failure to start doesn't cause a hang
232s started.set()
232s app.session_manager.close()
232s cls.notebook_thread = Thread(target=start_thread)
232s cls.notebook_thread.daemon = True
232s cls.notebook_thread.start()
232s started.wait()
232s > cls.wait_until_alive()
232s
232s notebook/tests/launchnotebook.py:198:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s cls =
232s
232s @classmethod
232s def wait_until_alive(cls):
232s """Wait for the server to be alive"""
232s url = cls.base_url() + 'api/contents'
232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
232s try:
232s cls.fetch_url(url)
232s except ModuleNotFoundError as error:
232s # Errors that should be immediately thrown back to caller
232s raise error
232s except Exception as e:
232s if not cls.notebook_thread.is_alive():
232s > raise RuntimeError("The notebook server failed to start") from e
232s E RuntimeError: The notebook server failed to start
232s
232s notebook/tests/launchnotebook.py:59: RuntimeError
232s ___________________ ERROR at setup of APITest.test_copy_path ___________________
232s
232s self =
232s
232s def _new_conn(self) -> socket.socket:
232s """Establish a socket connection and set nodelay settings on it.
232s
232s :return: New socket connection.
232s """
232s try:
232s > sock = connection.create_connection(
232s (self._dns_host, self.port),
232s self.timeout,
232s source_address=self.source_address,
232s socket_options=self.socket_options,
232s )
232s
232s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
232s raise err
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s address = ('localhost', 12341), timeout = None, source_address = None
232s socket_options = [(6, 1, 1)]
232s
232s def create_connection(
232s address: tuple[str, int],
232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
232s source_address: tuple[str, int] | None = None,
232s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
232s ) -> socket.socket:
232s """Connect to *address* and return the socket object.
232s
232s Convenience function. Connect to *address* (a 2-tuple ``(host,
232s port)``) and return the socket object. Passing the optional
232s *timeout* parameter will set the timeout on the socket instance
232s before attempting to connect. If no *timeout* is supplied, the
232s global default timeout setting returned by :func:`socket.getdefaulttimeout`
232s is used. If *source_address* is set it must be a tuple of (host, port)
232s for the socket to bind as a source address before making the connection.
232s An host of '' or port 0 tells the OS to use the default.
232s """
232s
232s host, port = address
232s if host.startswith("["):
232s host = host.strip("[]")
232s err = None
232s
232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
232s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
232s # The original create_connection function always returns all records.
232s family = allowed_gai_family()
232s
232s try:
232s host.encode("idna")
232s except UnicodeError:
232s raise LocationParseError(f"'{host}', label empty or too long") from None
232s
232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
232s af, socktype, proto, canonname, sa = res
232s sock = None
232s try:
232s sock = socket.socket(af, socktype, proto)
232s
232s # If provided, set socket level options before connecting.
232s _set_socket_options(sock, socket_options)
232s
232s if timeout is not _DEFAULT_TIMEOUT:
232s sock.settimeout(timeout)
232s if source_address:
232s sock.bind(source_address)
232s > sock.connect(sa)
232s E ConnectionRefusedError: [Errno 111] Connection refused
232s
232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
232s
232s The above exception was the direct cause of the following exception:
232s
232s self =
232s method = 'GET', url = '/a%40b/api/contents', body = None
232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
232s redirect = False, assert_same_host = False
232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
232s release_conn = False, chunked = False, body_pos = None, preload_content = False
232s decode_content = False, response_kw = {}
232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
232s destination_scheme = None, conn = None, release_this_conn = True
232s http_tunnel_required = False, err = None, clean_exit = False
232s
232s def urlopen( # type: ignore[override]
232s self,
232s method: str,
232s url: str,
232s body: _TYPE_BODY | None = None,
232s headers: typing.Mapping[str, str] | None = None,
232s retries: Retry | bool | int | None = None,
232s redirect: bool = True,
232s assert_same_host: bool = True,
232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
232s pool_timeout: int | None = None,
232s release_conn: bool | None = None,
232s chunked: bool = False,
232s body_pos: _TYPE_BODY_POSITION | None = None,
232s preload_content: bool = True,
232s decode_content: bool = True,
232s **response_kw: typing.Any,
232s ) -> BaseHTTPResponse:
232s """
232s Get a connection from the pool and perform an HTTP request. This is the
232s lowest level call for making a request, so you'll need to specify all
232s the raw details.
232s
232s .. note::
232s
232s More commonly, it's appropriate to use a convenience method
232s such as :meth:`request`.
232s
232s .. note::
232s
232s `release_conn` will only behave as expected if
232s `preload_content=False` because we want to make
232s `preload_content=False` the default behaviour someday soon without
232s breaking backwards compatibility.
232s
232s :param method:
232s HTTP request method (such as GET, POST, PUT, etc.)
232s
232s :param url:
232s The URL to perform the request on.
232s
232s :param body:
232s Data to send in the request body, either :class:`str`, :class:`bytes`,
232s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
232s
232s :param headers:
232s Dictionary of custom headers to send, such as User-Agent,
232s If-None-Match, etc. If None, pool headers are used. If provided,
232s these headers completely replace any pool-specific headers.
232s
232s :param retries:
232s Configure the number of retries to allow before raising a
232s :class:`~urllib3.exceptions.MaxRetryError` exception.
232s
232s Pass ``None`` to retry until you receive a response. Pass a
232s :class:`~urllib3.util.retry.Retry` object for fine-grained control
232s over different types of retries.
232s Pass an integer number to retry connection errors that many times,
232s but no other types of errors. Pass zero to never retry.
232s
232s If ``False``, then retries are disabled and any exception is raised
232s immediately. Also, instead of raising a MaxRetryError on redirects,
232s the redirect response will be returned.
232s
232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
232s
232s :param redirect:
232s If True, automatically handle redirects (status codes 301, 302,
232s 303, 307, 308). Each redirect counts as a retry. Disabling retries
232s will disable redirect, too.
232s
232s :param assert_same_host:
232s If ``True``, will make sure that the host of the pool requests is
232s consistent else will raise HostChangedError. When ``False``, you can
232s use the pool on an HTTP proxy and request foreign hosts.
232s
232s :param timeout:
232s If specified, overrides the default timeout for this one
232s request. It may be a float (in seconds) or an instance of
232s :class:`urllib3.util.Timeout`.
232s
232s :param pool_timeout:
232s If set and the pool is set to block=True, then this method will
232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
232s connection is available within the time period.
232s
232s :param bool preload_content:
232s If True, the response's body will be preloaded into memory.
232s
232s :param bool decode_content:
232s If True, will attempt to decode the body based on the
232s 'content-encoding' header.
232s
232s :param release_conn:
232s If False, then the urlopen call will not release the connection
232s back into the pool once a response is received (but will release if
232s you read the entire contents of the response such as when
232s `preload_content=True`). This is useful if you're not preloading
232s the response's content immediately. You will need to call
232s ``r.release_conn()`` on the response ``r`` to return the connection
232s back into the pool. If None, it takes the value of ``preload_content``
232s which defaults to ``True``.
232s
232s :param bool chunked:
232s If True, urllib3 will send the body using chunked transfer
232s encoding. Otherwise, urllib3 will send the body using the standard
232s content-length form. Defaults to False.
232s
232s :param int body_pos:
232s Position to seek to in file-like body in the event of a retry or
232s redirect. Typically this won't need to be set because urllib3 will
232s auto-populate the value when needed.
232s """
232s parsed_url = parse_url(url)
232s destination_scheme = parsed_url.scheme
232s
232s if headers is None:
232s headers = self.headers
232s
232s if not isinstance(retries, Retry):
232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
232s
232s if release_conn is None:
232s release_conn = preload_content
232s
232s # Check host
232s if assert_same_host and not self.is_same_host(url):
232s raise HostChangedError(self, url, retries)
232s
232s # Ensure that the URL we're connecting to is properly encoded
232s if url.startswith("/"):
232s url = to_str(_encode_target(url))
232s else:
232s url = to_str(parsed_url.url)
232s
232s conn = None
232s
232s # Track whether `conn` needs to be released before
232s # returning/raising/recursing. Update this variable if necessary, and
232s # leave `release_conn` constant throughout the function. That way, if
232s # the function recurses, the original value of `release_conn` will be
232s # passed down into the recursive call, and its value will be respected.
232s #
232s # See issue #651 [1] for details.
232s #
232s # [1]
232s release_this_conn = release_conn
232s
232s http_tunnel_required = connection_requires_http_tunnel(
232s self.proxy, self.proxy_config, destination_scheme
232s )
232s
232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
232s # have to copy the headers dict so we can safely change it without those
232s # changes being reflected in anyone else's copy.
232s if not http_tunnel_required:
232s headers = headers.copy() # type: ignore[attr-defined]
232s headers.update(self.proxy_headers) # type: ignore[union-attr]
232s
232s # Must keep the exception bound to a separate variable or else Python 3
232s # complains about UnboundLocalError.
232s err = None
232s
232s # Keep track of whether we cleanly exited the except block. This
232s # ensures we do proper cleanup in finally.
232s clean_exit = False
232s
232s # Rewind body position, if needed. Record current position
232s # for future rewinds in the event of a redirect/retry.
232s body_pos = set_file_position(body, body_pos)
232s
232s try:
232s # Request a connection from the queue.
232s timeout_obj = self._get_timeout(timeout)
232s conn = self._get_conn(timeout=pool_timeout)
232s
232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
232s
232s # Is this a closed/new connection that requires CONNECT tunnelling?
232s if self.proxy is not None and http_tunnel_required and conn.is_closed:
232s try:
232s self._prepare_proxy(conn)
232s except (BaseSSLError, OSError, SocketTimeout) as e:
232s self._raise_timeout(
232s err=e, url=self.proxy.url, timeout_value=conn.timeout
232s )
232s raise
232s
232s # If we're going to release the connection in ``finally:``, then
232s # the response doesn't need to know about the connection. Otherwise
232s # it will also try to release it and we'll have a double-release
232s # mess.
232s response_conn = conn if not release_conn else None
232s
232s # Make the request on the HTTPConnection object
232s > response = self._make_request(
232s conn,
232s method,
232s url,
232s timeout=timeout_obj,
232s body=body,
232s headers=headers,
232s chunked=chunked,
232s retries=retries,
232s response_conn=response_conn,
232s preload_content=preload_content,
232s decode_content=decode_content,
232s **response_kw,
232s )
232s
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
232s conn.request(
232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
232s self.endheaders()
232s /usr/lib/python3.12/http/client.py:1331: in endheaders
232s self._send_output(message_body, encode_chunked=encode_chunked)
232s /usr/lib/python3.12/http/client.py:1091: in _send_output
232s self.send(msg)
232s /usr/lib/python3.12/http/client.py:1035: in send
232s self.connect()
232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
232s self.sock = self._new_conn()
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s self =
232s
232s def _new_conn(self) -> socket.socket:
232s """Establish a socket connection and set nodelay settings on it.
232s
232s :return: New socket connection.
232s """
232s try:
232s sock = connection.create_connection(
232s (self._dns_host, self.port),
232s self.timeout,
232s source_address=self.source_address,
232s socket_options=self.socket_options,
232s )
232s except socket.gaierror as e:
232s raise NameResolutionError(self.host, self, e) from e
232s except SocketTimeout as e:
232s raise ConnectTimeoutError(
232s self,
232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
232s ) from e
232s
232s except OSError as e:
232s > raise NewConnectionError(
232s self, f"Failed to establish a new connection: {e}"
232s ) from e
232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
232s
232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
232s
232s The above exception was the direct cause of the following exception:
232s
232s self =
232s request = , stream = False
232s timeout = Timeout(connect=None, read=None, total=None), verify = True
232s cert = None, proxies = OrderedDict()
232s
232s def send(
232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
232s ):
232s """Sends PreparedRequest object. Returns Response object.
232s
232s :param request: The :class:`PreparedRequest ` being sent.
232s :param stream: (optional) Whether to stream the request content.
232s :param timeout: (optional) How long to wait for the server to send
232s data before giving up, as a float, or a :ref:`(connect timeout,
232s read timeout) ` tuple.
232s :type timeout: float or tuple or urllib3 Timeout object
232s :param verify: (optional) Either a boolean, in which case it controls whether
232s we verify the server's TLS certificate, or a string, in which case it
232s must be a path to a CA bundle to use
232s :param cert: (optional) Any user-provided SSL certificate to be trusted.
232s :param proxies: (optional) The proxies dictionary to apply to the request.
232s :rtype: requests.Response
232s """
232s
232s try:
232s conn = self.get_connection(request.url, proxies)
232s except LocationValueError as e:
232s raise InvalidURL(e, request=request)
232s
232s self.cert_verify(conn, request.url, verify, cert)
232s url = self.request_url(request, proxies)
232s self.add_headers(
232s request,
232s stream=stream,
232s timeout=timeout,
232s verify=verify,
232s cert=cert,
232s proxies=proxies,
232s )
232s
232s chunked = not (request.body is None or "Content-Length" in request.headers)
232s
232s if isinstance(timeout, tuple):
232s try:
232s connect, read = timeout
232s timeout = TimeoutSauce(connect=connect, read=read)
232s except ValueError:
232s raise ValueError(
232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
232s f"or a single float to set both timeouts to the same value."
232s )
232s elif isinstance(timeout, TimeoutSauce):
232s pass
232s else:
232s timeout = TimeoutSauce(connect=timeout, read=timeout)
232s
232s try:
232s > resp = conn.urlopen(
232s method=request.method,
232s url=url,
232s body=request.body,
232s headers=request.headers,
232s redirect=False,
232s assert_same_host=False,
232s preload_content=False,
232s decode_content=False,
232s retries=self.max_retries,
232s timeout=timeout,
232s chunked=chunked,
232s )
232s
232s /usr/lib/python3/dist-packages/requests/adapters.py:486:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
232s retries = retries.increment(
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
232s method = 'GET', url = '/a%40b/api/contents', response = None
232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
232s _pool =
232s _stacktrace =
232s
232s def increment(
232s self,
232s method: str | None = None,
232s url: str | None = None,
232s response: BaseHTTPResponse | None = None,
232s error: Exception | None = None,
232s _pool: ConnectionPool | None = None,
232s _stacktrace: TracebackType | None = None,
232s ) -> Retry:
232s """Return a new Retry object with incremented retry counters.
232s
232s :param response: A response object, or None, if the server did not
232s return a response.
232s :type response: :class:`~urllib3.response.BaseHTTPResponse`
232s :param Exception error: An error encountered during the request, or
232s None if the response was received successfully.
232s
232s :return: A new ``Retry`` object.
232s """
232s if self.total is False and error:
232s # Disabled, indicate to re-raise the error.
232s raise reraise(type(error), error, _stacktrace)
232s
232s total = self.total
232s if total is not None:
232s total -= 1
232s
232s connect = self.connect
232s read = self.read
232s redirect = self.redirect
232s status_count = self.status
232s other = self.other
232s cause = "unknown"
232s status = None
232s redirect_location = None
232s
232s if error and self._is_connection_error(error):
232s # Connect retry?
232s if connect is False:
232s raise reraise(type(error), error, _stacktrace)
232s elif connect is not None:
232s connect -= 1
232s
232s elif error and self._is_read_error(error):
232s # Read retry?
232s if read is False or method is None or not self._is_method_retryable(method):
232s raise reraise(type(error), error, _stacktrace)
232s elif read is not None:
232s read -= 1
232s
232s elif error:
232s # Other retry?
232s if other is not None:
232s other -= 1
232s
232s elif response and response.get_redirect_location():
232s # Redirect retry?
232s if redirect is not None:
232s redirect -= 1
232s cause = "too many redirects"
232s response_redirect_location = response.get_redirect_location()
232s if response_redirect_location:
232s redirect_location = response_redirect_location
232s status = response.status
232s
232s else:
232s # Incrementing because of a server error like a 500 in
232s # status_forcelist and the given method is in the allowed_methods
232s cause = ResponseError.GENERIC_ERROR
232s if response and response.status:
232s if status_count is not None:
232s status_count -= 1
232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
232s status = response.status
232s
232s history = self.history + (
232s RequestHistory(method, url, error, status, redirect_location),
232s )
232s
232s new_retry = self.new(
232s total=total,
232s connect=connect,
232s read=read,
232s redirect=redirect,
232s status=status_count,
232s other=other,
232s history=history,
232s )
232s
232s if new_retry.is_exhausted():
232s reason = error or ResponseError(cause)
232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
232s
232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
232s
232s During handling of the above exception, another exception occurred:
232s
232s cls =
232s
232s @classmethod
232s def wait_until_alive(cls):
232s """Wait for the server to be alive"""
232s url = cls.base_url() + 'api/contents'
232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
232s try:
232s > cls.fetch_url(url)
232s
232s notebook/tests/launchnotebook.py:53:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s notebook/tests/launchnotebook.py:82: in fetch_url
232s return requests.get(url)
232s /usr/lib/python3/dist-packages/requests/api.py:73: in get
232s return request("get", url, params=params, **kwargs)
232s /usr/lib/python3/dist-packages/requests/api.py:59: in request
232s return session.request(method=method, url=url, **kwargs)
232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
232s resp = self.send(prep, **send_kwargs)
232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
232s r = adapter.send(request, **kwargs)
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s self =
232s request = , stream = False
232s timeout = Timeout(connect=None, read=None, total=None), verify = True
232s cert = None, proxies = OrderedDict()
232s
232s def send(
232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
232s ):
232s """Sends PreparedRequest object. Returns Response object.
232s
232s :param request: The :class:`PreparedRequest ` being sent.
232s :param stream: (optional) Whether to stream the request content.
232s :param timeout: (optional) How long to wait for the server to send
232s data before giving up, as a float, or a :ref:`(connect timeout,
232s read timeout) ` tuple.
232s :type timeout: float or tuple or urllib3 Timeout object
232s :param verify: (optional) Either a boolean, in which case it controls whether
232s we verify the server's TLS certificate, or a string, in which case it
232s must be a path to a CA bundle to use
232s :param cert: (optional) Any user-provided SSL certificate to be trusted.
232s :param proxies: (optional) The proxies dictionary to apply to the request.
232s :rtype: requests.Response
232s """
232s
232s try:
232s conn = self.get_connection(request.url, proxies)
232s except LocationValueError as e:
232s raise InvalidURL(e, request=request)
232s
232s self.cert_verify(conn, request.url, verify, cert)
232s url = self.request_url(request, proxies)
232s self.add_headers(
232s request,
232s stream=stream,
232s timeout=timeout,
232s verify=verify,
232s cert=cert,
232s proxies=proxies,
232s )
232s
232s chunked = not (request.body is None or "Content-Length" in request.headers)
232s
232s if isinstance(timeout, tuple):
232s try:
232s connect, read = timeout
232s timeout = TimeoutSauce(connect=connect, read=read)
232s except ValueError:
232s raise ValueError(
232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
232s f"or a single float to set both timeouts to the same value."
232s )
232s elif isinstance(timeout, TimeoutSauce):
232s pass
232s else:
232s timeout = TimeoutSauce(connect=timeout, read=timeout)
232s
232s try:
232s resp = conn.urlopen(
232s method=request.method,
232s url=url,
232s body=request.body,
232s headers=request.headers,
232s redirect=False,
232s assert_same_host=False,
232s preload_content=False,
232s decode_content=False,
232s retries=self.max_retries,
232s timeout=timeout,
232s chunked=chunked,
232s )
232s
232s except (ProtocolError, OSError) as err:
232s raise ConnectionError(err, request=request)
232s
232s except MaxRetryError as e:
232s if isinstance(e.reason, ConnectTimeoutError):
232s # TODO: Remove this in 3.0.0: see #2811
232s if not isinstance(e.reason, NewConnectionError):
232s raise ConnectTimeout(e, request=request)
232s
232s if isinstance(e.reason, ResponseError):
232s raise RetryError(e, request=request)
232s
232s if isinstance(e.reason, _ProxyError):
232s raise ProxyError(e, request=request)
232s
232s if isinstance(e.reason, _SSLError):
232s # This branch is for urllib3 v1.22 and later.
232s raise SSLError(e, request=request)
232s
232s > raise ConnectionError(e, request=request)
232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
232s
232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
232s
232s The above exception was the direct cause of the following exception:
232s
232s cls =
232s
232s @classmethod
232s def setup_class(cls):
232s cls.tmp_dir = TemporaryDirectory()
232s def tmp(*parts):
232s path = os.path.join(cls.tmp_dir.name, *parts)
232s try:
232s os.makedirs(path)
232s except OSError as e:
232s if e.errno != errno.EEXIST:
232s raise
232s return path
232s
232s cls.home_dir = tmp('home')
232s data_dir = cls.data_dir = tmp('data')
232s config_dir = cls.config_dir = tmp('config')
232s runtime_dir = cls.runtime_dir = tmp('runtime')
232s cls.notebook_dir = tmp('notebooks')
232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
232s cls.env_patch.start()
232s # Patch systemwide & user-wide data & config directories, to isolate
232s # the tests from oddities of the local setup. But leave Python env
232s # locations alone, so data files for e.g. nbconvert are accessible.
232s # If this isolation isn't sufficient, you may need to run the tests in
232s # a virtualenv or conda env.
232s cls.path_patch = patch.multiple(
232s jupyter_core.paths,
232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
232s )
232s cls.path_patch.start()
232s
232s config = cls.config or Config()
232s config.NotebookNotary.db_file = ':memory:'
232s
232s cls.token = hexlify(os.urandom(4)).decode('ascii')
232s
232s started = Event()
232s def start_thread():
232s try:
232s bind_args = cls.get_bind_args()
232s app = cls.notebook = NotebookApp(
232s port_retries=0,
232s open_browser=False,
232s config_dir=cls.config_dir,
232s data_dir=cls.data_dir,
232s runtime_dir=cls.runtime_dir,
232s notebook_dir=cls.notebook_dir,
232s base_url=cls.url_prefix,
232s config=config,
232s allow_root=True,
232s token=cls.token,
232s **bind_args
232s )
232s if "asyncio" in sys.modules:
232s app._init_asyncio_patch()
232s import asyncio
232s
232s asyncio.set_event_loop(asyncio.new_event_loop())
232s # Patch the current loop in order to match production
232s # behavior
232s import nest_asyncio
232s
232s nest_asyncio.apply()
232s # don't register signal handler during tests
232s app.init_signal = lambda : None
232s # clear log handlers and propagate to root for nose to capture it
232s # needs to be redone after initialize, which reconfigures logging
232s app.log.propagate = True
232s app.log.handlers = []
232s app.initialize(argv=cls.get_argv())
232s app.log.propagate = True
232s app.log.handlers = []
232s loop = IOLoop.current()
232s loop.add_callback(started.set)
232s app.start()
232s finally:
232s # set the event, so failure to start doesn't cause a hang
232s started.set()
232s app.session_manager.close()
232s cls.notebook_thread = Thread(target=start_thread)
232s cls.notebook_thread.daemon = True
232s cls.notebook_thread.start()
232s started.wait()
232s > cls.wait_until_alive()
232s
232s notebook/tests/launchnotebook.py:198:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s cls =
232s
232s @classmethod
232s def wait_until_alive(cls):
232s """Wait for the server to be alive"""
232s url = cls.base_url() + 'api/contents'
232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
232s try:
232s cls.fetch_url(url)
232s except ModuleNotFoundError as error:
232s # Errors that should be immediately thrown back to caller
232s raise error
232s except Exception as e:
232s if not cls.notebook_thread.is_alive():
232s > raise RuntimeError("The notebook server failed to start") from e
232s E RuntimeError: The notebook server failed to start
232s
232s notebook/tests/launchnotebook.py:59: RuntimeError
232s _________________ ERROR at setup of APITest.test_copy_put_400 __________________
232s
232s self =
232s
232s def _new_conn(self) -> socket.socket:
232s """Establish a socket connection and set nodelay settings on it.
232s
232s :return: New socket connection.
232s """
232s try:
232s > sock = connection.create_connection(
232s (self._dns_host, self.port),
232s self.timeout,
232s source_address=self.source_address,
232s socket_options=self.socket_options,
232s )
232s
232s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
232s raise err
232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
232s
232s address = ('localhost', 12341), timeout = None, source_address = None
232s socket_options = [(6, 1, 1)]
232s
232s def create_connection(
232s address: tuple[str, int],
232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
232s source_address: tuple[str, int] | None = None,
232s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
232s ) -> socket.socket:
232s """Connect to *address* and return the socket object.
232s
232s Convenience function. Connect to *address* (a 2-tuple ``(host,
232s port)``) and return the socket object. Passing the optional
232s *timeout* parameter will set the timeout on the socket instance
232s before attempting to connect. If no *timeout* is supplied, the
232s global default timeout setting returned by :func:`socket.getdefaulttimeout`
232s is used. If *source_address* is set it must be a tuple of (host, port)
232s for the socket to bind as a source address before making the connection.
232s An host of '' or port 0 tells the OS to use the default.
232s """
232s
232s host, port = address
232s if host.startswith("["):
232s host = host.strip("[]")
232s err = None
232s
232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
232s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
232s # The original create_connection function always returns all records.
232s family = allowed_gai_family()
232s
232s try:
232s host.encode("idna")
232s except UnicodeError:
232s raise LocationParseError(f"'{host}', label empty or too long") from None
232s
232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
232s af, socktype, proto, canonname, sa = res
232s sock = None
232s try:
232s sock = socket.socket(af, socktype, proto)
232s
232s # If provided, set socket level options before connecting.
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 232s elif error and self._is_read_error(error): 232s # Read retry? 232s if read is False or method is None or not self._is_method_retryable(method): 232s raise reraise(type(error), error, _stacktrace) 232s elif read is not None: 232s read -= 1 232s 232s elif error: 232s # Other retry? 232s if other is not None: 232s other -= 1 232s 232s elif response and response.get_redirect_location(): 232s # Redirect retry? 232s if redirect is not None: 232s redirect -= 1 232s cause = "too many redirects" 232s response_redirect_location = response.get_redirect_location() 232s if response_redirect_location: 232s redirect_location = response_redirect_location 232s status = response.status 232s 232s else: 232s # Incrementing because of a server error like a 500 in 232s # status_forcelist and the given method is in the allowed_methods 232s cause = ResponseError.GENERIC_ERROR 232s if response and response.status: 232s if status_count is not None: 232s status_count -= 1 232s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 232s status = response.status 232s 232s history = self.history + ( 232s RequestHistory(method, url, error, status, redirect_location), 232s ) 232s 232s new_retry = self.new( 232s total=total, 232s connect=connect, 232s read=read, 232s redirect=redirect, 232s status=status_count, 232s other=other, 232s history=history, 232s ) 232s 232s if new_retry.is_exhausted(): 232s reason = error or 
ResponseError(cause) 232s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 232s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 232s 232s During handling of the above exception, another exception occurred: 232s 232s cls = 232s 232s @classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s > cls.fetch_url(url) 232s 232s notebook/tests/launchnotebook.py:53: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s notebook/tests/launchnotebook.py:82: in fetch_url 232s return requests.get(url) 232s /usr/lib/python3/dist-packages/requests/api.py:73: in get 232s return request("get", url, params=params, **kwargs) 232s /usr/lib/python3/dist-packages/requests/api.py:59: in request 232s return session.request(method=method, url=url, **kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 232s resp = self.send(prep, **send_kwargs) 232s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 232s r = adapter.send(request, **kwargs) 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 
232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s except (ProtocolError, OSError) as err: 232s raise ConnectionError(err, request=request) 232s 232s except MaxRetryError as e: 232s if isinstance(e.reason, ConnectTimeoutError): 232s # TODO: Remove this in 3.0.0: see #2811 232s if not isinstance(e.reason, NewConnectionError): 232s raise ConnectTimeout(e, request=request) 232s 232s if isinstance(e.reason, ResponseError): 232s raise RetryError(e, request=request) 232s 232s if isinstance(e.reason, _ProxyError): 232s raise ProxyError(e, request=request) 232s 232s if isinstance(e.reason, _SSLError): 232s # This branch is for urllib3 v1.22 and later. 
232s raise SSLError(e, request=request) 232s 232s > raise ConnectionError(e, request=request) 232s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s cls = 232s 232s @classmethod 232s def setup_class(cls): 232s cls.tmp_dir = TemporaryDirectory() 232s def tmp(*parts): 232s path = os.path.join(cls.tmp_dir.name, *parts) 232s try: 232s os.makedirs(path) 232s except OSError as e: 232s if e.errno != errno.EEXIST: 232s raise 232s return path 232s 232s cls.home_dir = tmp('home') 232s data_dir = cls.data_dir = tmp('data') 232s config_dir = cls.config_dir = tmp('config') 232s runtime_dir = cls.runtime_dir = tmp('runtime') 232s cls.notebook_dir = tmp('notebooks') 232s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 232s cls.env_patch.start() 232s # Patch systemwide & user-wide data & config directories, to isolate 232s # the tests from oddities of the local setup. But leave Python env 232s # locations alone, so data files for e.g. nbconvert are accessible. 232s # If this isolation isn't sufficient, you may need to run the tests in 232s # a virtualenv or conda env. 
232s cls.path_patch = patch.multiple( 232s jupyter_core.paths, 232s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 232s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 232s ) 232s cls.path_patch.start() 232s 232s config = cls.config or Config() 232s config.NotebookNotary.db_file = ':memory:' 232s 232s cls.token = hexlify(os.urandom(4)).decode('ascii') 232s 232s started = Event() 232s def start_thread(): 232s try: 232s bind_args = cls.get_bind_args() 232s app = cls.notebook = NotebookApp( 232s port_retries=0, 232s open_browser=False, 232s config_dir=cls.config_dir, 232s data_dir=cls.data_dir, 232s runtime_dir=cls.runtime_dir, 232s notebook_dir=cls.notebook_dir, 232s base_url=cls.url_prefix, 232s config=config, 232s allow_root=True, 232s token=cls.token, 232s **bind_args 232s ) 232s if "asyncio" in sys.modules: 232s app._init_asyncio_patch() 232s import asyncio 232s 232s asyncio.set_event_loop(asyncio.new_event_loop()) 232s # Patch the current loop in order to match production 232s # behavior 232s import nest_asyncio 232s 232s nest_asyncio.apply() 232s # don't register signal handler during tests 232s app.init_signal = lambda : None 232s # clear log handlers and propagate to root for nose to capture it 232s # needs to be redone after initialize, which reconfigures logging 232s app.log.propagate = True 232s app.log.handlers = [] 232s app.initialize(argv=cls.get_argv()) 232s app.log.propagate = True 232s app.log.handlers = [] 232s loop = IOLoop.current() 232s loop.add_callback(started.set) 232s app.start() 232s finally: 232s # set the event, so failure to start doesn't cause a hang 232s started.set() 232s app.session_manager.close() 232s cls.notebook_thread = Thread(target=start_thread) 232s cls.notebook_thread.daemon = True 232s cls.notebook_thread.start() 232s started.wait() 232s > cls.wait_until_alive() 232s 232s notebook/tests/launchnotebook.py:198: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s cls = 232s 232s 
@classmethod 232s def wait_until_alive(cls): 232s """Wait for the server to be alive""" 232s url = cls.base_url() + 'api/contents' 232s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 232s try: 232s cls.fetch_url(url) 232s except ModuleNotFoundError as error: 232s # Errors that should be immediately thrown back to caller 232s raise error 232s except Exception as e: 232s if not cls.notebook_thread.is_alive(): 232s > raise RuntimeError("The notebook server failed to start") from e 232s E RuntimeError: The notebook server failed to start 232s 232s notebook/tests/launchnotebook.py:59: RuntimeError 232s ______________ ERROR at setup of APITest.test_copy_put_400_hidden ______________ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s > sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 232s raise err 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s address = ('localhost', 12341), timeout = None, source_address = None 232s socket_options = [(6, 1, 1)] 232s 232s def create_connection( 232s address: tuple[str, int], 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s source_address: tuple[str, int] | None = None, 232s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 232s ) -> socket.socket: 232s """Connect to *address* and return the socket object. 232s 232s Convenience function. Connect to *address* (a 2-tuple ``(host, 232s port)``) and return the socket object. 
Passing the optional 232s *timeout* parameter will set the timeout on the socket instance 232s before attempting to connect. If no *timeout* is supplied, the 232s global default timeout setting returned by :func:`socket.getdefaulttimeout` 232s is used. If *source_address* is set it must be a tuple of (host, port) 232s for the socket to bind as a source address before making the connection. 232s An host of '' or port 0 tells the OS to use the default. 232s """ 232s 232s host, port = address 232s if host.startswith("["): 232s host = host.strip("[]") 232s err = None 232s 232s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 232s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 232s # The original create_connection function always returns all records. 232s family = allowed_gai_family() 232s 232s try: 232s host.encode("idna") 232s except UnicodeError: 232s raise LocationParseError(f"'{host}', label empty or too long") from None 232s 232s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 232s af, socktype, proto, canonname, sa = res 232s sock = None 232s try: 232s sock = socket.socket(af, socktype, proto) 232s 232s # If provided, set socket level options before connecting. 
232s _set_socket_options(sock, socket_options) 232s 232s if timeout is not _DEFAULT_TIMEOUT: 232s sock.settimeout(timeout) 232s if source_address: 232s sock.bind(source_address) 232s > sock.connect(sa) 232s E ConnectionRefusedError: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s method = 'GET', url = '/a%40b/api/contents', body = None 232s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 232s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s redirect = False, assert_same_host = False 232s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 232s release_conn = False, chunked = False, body_pos = None, preload_content = False 232s decode_content = False, response_kw = {} 232s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 232s destination_scheme = None, conn = None, release_this_conn = True 232s http_tunnel_required = False, err = None, clean_exit = False 232s 232s def urlopen( # type: ignore[override] 232s self, 232s method: str, 232s url: str, 232s body: _TYPE_BODY | None = None, 232s headers: typing.Mapping[str, str] | None = None, 232s retries: Retry | bool | int | None = None, 232s redirect: bool = True, 232s assert_same_host: bool = True, 232s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 232s pool_timeout: int | None = None, 232s release_conn: bool | None = None, 232s chunked: bool = False, 232s body_pos: _TYPE_BODY_POSITION | None = None, 232s preload_content: bool = True, 232s decode_content: bool = True, 232s **response_kw: typing.Any, 232s ) -> BaseHTTPResponse: 232s """ 232s Get a connection from the pool and perform an HTTP request. 
This is the 232s lowest level call for making a request, so you'll need to specify all 232s the raw details. 232s 232s .. note:: 232s 232s More commonly, it's appropriate to use a convenience method 232s such as :meth:`request`. 232s 232s .. note:: 232s 232s `release_conn` will only behave as expected if 232s `preload_content=False` because we want to make 232s `preload_content=False` the default behaviour someday soon without 232s breaking backwards compatibility. 232s 232s :param method: 232s HTTP request method (such as GET, POST, PUT, etc.) 232s 232s :param url: 232s The URL to perform the request on. 232s 232s :param body: 232s Data to send in the request body, either :class:`str`, :class:`bytes`, 232s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 232s 232s :param headers: 232s Dictionary of custom headers to send, such as User-Agent, 232s If-None-Match, etc. If None, pool headers are used. If provided, 232s these headers completely replace any pool-specific headers. 232s 232s :param retries: 232s Configure the number of retries to allow before raising a 232s :class:`~urllib3.exceptions.MaxRetryError` exception. 232s 232s Pass ``None`` to retry until you receive a response. Pass a 232s :class:`~urllib3.util.retry.Retry` object for fine-grained control 232s over different types of retries. 232s Pass an integer number to retry connection errors that many times, 232s but no other types of errors. Pass zero to never retry. 232s 232s If ``False``, then retries are disabled and any exception is raised 232s immediately. Also, instead of raising a MaxRetryError on redirects, 232s the redirect response will be returned. 232s 232s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 232s 232s :param redirect: 232s If True, automatically handle redirects (status codes 301, 302, 232s 303, 307, 308). Each redirect counts as a retry. Disabling retries 232s will disable redirect, too. 
232s 232s :param assert_same_host: 232s If ``True``, will make sure that the host of the pool requests is 232s consistent else will raise HostChangedError. When ``False``, you can 232s use the pool on an HTTP proxy and request foreign hosts. 232s 232s :param timeout: 232s If specified, overrides the default timeout for this one 232s request. It may be a float (in seconds) or an instance of 232s :class:`urllib3.util.Timeout`. 232s 232s :param pool_timeout: 232s If set and the pool is set to block=True, then this method will 232s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 232s connection is available within the time period. 232s 232s :param bool preload_content: 232s If True, the response's body will be preloaded into memory. 232s 232s :param bool decode_content: 232s If True, will attempt to decode the body based on the 232s 'content-encoding' header. 232s 232s :param release_conn: 232s If False, then the urlopen call will not release the connection 232s back into the pool once a response is received (but will release if 232s you read the entire contents of the response such as when 232s `preload_content=True`). This is useful if you're not preloading 232s the response's content immediately. You will need to call 232s ``r.release_conn()`` on the response ``r`` to return the connection 232s back into the pool. If None, it takes the value of ``preload_content`` 232s which defaults to ``True``. 232s 232s :param bool chunked: 232s If True, urllib3 will send the body using chunked transfer 232s encoding. Otherwise, urllib3 will send the body using the standard 232s content-length form. Defaults to False. 232s 232s :param int body_pos: 232s Position to seek to in file-like body in the event of a retry or 232s redirect. Typically this won't need to be set because urllib3 will 232s auto-populate the value when needed. 
232s """ 232s parsed_url = parse_url(url) 232s destination_scheme = parsed_url.scheme 232s 232s if headers is None: 232s headers = self.headers 232s 232s if not isinstance(retries, Retry): 232s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 232s 232s if release_conn is None: 232s release_conn = preload_content 232s 232s # Check host 232s if assert_same_host and not self.is_same_host(url): 232s raise HostChangedError(self, url, retries) 232s 232s # Ensure that the URL we're connecting to is properly encoded 232s if url.startswith("/"): 232s url = to_str(_encode_target(url)) 232s else: 232s url = to_str(parsed_url.url) 232s 232s conn = None 232s 232s # Track whether `conn` needs to be released before 232s # returning/raising/recursing. Update this variable if necessary, and 232s # leave `release_conn` constant throughout the function. That way, if 232s # the function recurses, the original value of `release_conn` will be 232s # passed down into the recursive call, and its value will be respected. 232s # 232s # See issue #651 [1] for details. 232s # 232s # [1] 232s release_this_conn = release_conn 232s 232s http_tunnel_required = connection_requires_http_tunnel( 232s self.proxy, self.proxy_config, destination_scheme 232s ) 232s 232s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 232s # have to copy the headers dict so we can safely change it without those 232s # changes being reflected in anyone else's copy. 232s if not http_tunnel_required: 232s headers = headers.copy() # type: ignore[attr-defined] 232s headers.update(self.proxy_headers) # type: ignore[union-attr] 232s 232s # Must keep the exception bound to a separate variable or else Python 3 232s # complains about UnboundLocalError. 232s err = None 232s 232s # Keep track of whether we cleanly exited the except block. This 232s # ensures we do proper cleanup in finally. 232s clean_exit = False 232s 232s # Rewind body position, if needed. 
Record current position 232s # for future rewinds in the event of a redirect/retry. 232s body_pos = set_file_position(body, body_pos) 232s 232s try: 232s # Request a connection from the queue. 232s timeout_obj = self._get_timeout(timeout) 232s conn = self._get_conn(timeout=pool_timeout) 232s 232s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 232s 232s # Is this a closed/new connection that requires CONNECT tunnelling? 232s if self.proxy is not None and http_tunnel_required and conn.is_closed: 232s try: 232s self._prepare_proxy(conn) 232s except (BaseSSLError, OSError, SocketTimeout) as e: 232s self._raise_timeout( 232s err=e, url=self.proxy.url, timeout_value=conn.timeout 232s ) 232s raise 232s 232s # If we're going to release the connection in ``finally:``, then 232s # the response doesn't need to know about the connection. Otherwise 232s # it will also try to release it and we'll have a double-release 232s # mess. 232s response_conn = conn if not release_conn else None 232s 232s # Make the request on the HTTPConnection object 232s > response = self._make_request( 232s conn, 232s method, 232s url, 232s timeout=timeout_obj, 232s body=body, 232s headers=headers, 232s chunked=chunked, 232s retries=retries, 232s response_conn=response_conn, 232s preload_content=preload_content, 232s decode_content=decode_content, 232s **response_kw, 232s ) 232s 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 232s conn.request( 232s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 232s self.endheaders() 232s /usr/lib/python3.12/http/client.py:1331: in endheaders 232s self._send_output(message_body, encode_chunked=encode_chunked) 232s /usr/lib/python3.12/http/client.py:1091: in _send_output 232s self.send(msg) 232s /usr/lib/python3.12/http/client.py:1035: in 
send 232s self.connect() 232s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 232s self.sock = self._new_conn() 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = 232s 232s def _new_conn(self) -> socket.socket: 232s """Establish a socket connection and set nodelay settings on it. 232s 232s :return: New socket connection. 232s """ 232s try: 232s sock = connection.create_connection( 232s (self._dns_host, self.port), 232s self.timeout, 232s source_address=self.source_address, 232s socket_options=self.socket_options, 232s ) 232s except socket.gaierror as e: 232s raise NameResolutionError(self.host, self, e) from e 232s except SocketTimeout as e: 232s raise ConnectTimeoutError( 232s self, 232s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 232s ) from e 232s 232s except OSError as e: 232s > raise NewConnectionError( 232s self, f"Failed to establish a new connection: {e}" 232s ) from e 232s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 232s 232s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 232s 232s The above exception was the direct cause of the following exception: 232s 232s self = 232s request = , stream = False 232s timeout = Timeout(connect=None, read=None, total=None), verify = True 232s cert = None, proxies = OrderedDict() 232s 232s def send( 232s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 232s ): 232s """Sends PreparedRequest object. Returns Response object. 232s 232s :param request: The :class:`PreparedRequest ` being sent. 232s :param stream: (optional) Whether to stream the request content. 232s :param timeout: (optional) How long to wait for the server to send 232s data before giving up, as a float, or a :ref:`(connect timeout, 232s read timeout) ` tuple. 
232s :type timeout: float or tuple or urllib3 Timeout object 232s :param verify: (optional) Either a boolean, in which case it controls whether 232s we verify the server's TLS certificate, or a string, in which case it 232s must be a path to a CA bundle to use 232s :param cert: (optional) Any user-provided SSL certificate to be trusted. 232s :param proxies: (optional) The proxies dictionary to apply to the request. 232s :rtype: requests.Response 232s """ 232s 232s try: 232s conn = self.get_connection(request.url, proxies) 232s except LocationValueError as e: 232s raise InvalidURL(e, request=request) 232s 232s self.cert_verify(conn, request.url, verify, cert) 232s url = self.request_url(request, proxies) 232s self.add_headers( 232s request, 232s stream=stream, 232s timeout=timeout, 232s verify=verify, 232s cert=cert, 232s proxies=proxies, 232s ) 232s 232s chunked = not (request.body is None or "Content-Length" in request.headers) 232s 232s if isinstance(timeout, tuple): 232s try: 232s connect, read = timeout 232s timeout = TimeoutSauce(connect=connect, read=read) 232s except ValueError: 232s raise ValueError( 232s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 232s f"or a single float to set both timeouts to the same value." 
232s ) 232s elif isinstance(timeout, TimeoutSauce): 232s pass 232s else: 232s timeout = TimeoutSauce(connect=timeout, read=timeout) 232s 232s try: 232s > resp = conn.urlopen( 232s method=request.method, 232s url=url, 232s body=request.body, 232s headers=request.headers, 232s redirect=False, 232s assert_same_host=False, 232s preload_content=False, 232s decode_content=False, 232s retries=self.max_retries, 232s timeout=timeout, 232s chunked=chunked, 232s ) 232s 232s /usr/lib/python3/dist-packages/requests/adapters.py:486: 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 232s retries = retries.increment( 232s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 232s 232s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 232s method = 'GET', url = '/a%40b/api/contents', response = None 232s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 232s _pool = 232s _stacktrace = 232s 232s def increment( 232s self, 232s method: str | None = None, 232s url: str | None = None, 232s response: BaseHTTPResponse | None = None, 232s error: Exception | None = None, 232s _pool: ConnectionPool | None = None, 232s _stacktrace: TracebackType | None = None, 232s ) -> Retry: 232s """Return a new Retry object with incremented retry counters. 232s 232s :param response: A response object, or None, if the server did not 232s return a response. 232s :type response: :class:`~urllib3.response.BaseHTTPResponse` 232s :param Exception error: An error encountered during the request, or 232s None if the response was received successfully. 232s 232s :return: A new ``Retry`` object. 232s """ 232s if self.total is False and error: 232s # Disabled, indicate to re-raise the error. 
232s raise reraise(type(error), error, _stacktrace) 232s 232s total = self.total 232s if total is not None: 232s total -= 1 232s 232s connect = self.connect 232s read = self.read 232s redirect = self.redirect 232s status_count = self.status 232s other = self.other 232s cause = "unknown" 232s status = None 232s redirect_location = None 232s 232s if error and self._is_connection_error(error): 232s # Connect retry? 232s if connect is False: 232s raise reraise(type(error), error, _stacktrace) 232s elif connect is not None: 232s connect -= 1 232s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s                     raise SSLError(e, request=request)
233s 
233s >           raise ConnectionError(e, request=request)
233s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s cls =
233s 
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s 
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s 
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s 
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s 
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                 import asyncio
233s 
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s 
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s 
233s notebook/tests/launchnotebook.py:198:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s cls =
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ________________ ERROR at setup of APITest.test_create_untitled ________________
233s 
233s self =
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s 
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object.
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s         """
233s         parsed_url = parse_url(url)
233s         destination_scheme = parsed_url.scheme
233s
233s         if headers is None:
233s             headers = self.headers
233s
233s         if not isinstance(retries, Retry):
233s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s
233s         if release_conn is None:
233s             release_conn = preload_content
233s
233s         # Check host
233s         if assert_same_host and not self.is_same_host(url):
233s             raise HostChangedError(self, url, retries)
233s
233s         # Ensure that the URL we're connecting to is properly encoded
233s         if url.startswith("/"):
233s             url = to_str(_encode_target(url))
233s         else:
233s             url = to_str(parsed_url.url)
233s
233s         conn = None
233s
233s         # Track whether `conn` needs to be released before
233s         # returning/raising/recursing. Update this variable if necessary, and
233s         # leave `release_conn` constant throughout the function. That way, if
233s         # the function recurses, the original value of `release_conn` will be
233s         # passed down into the recursive call, and its value will be respected.
233s         #
233s         # See issue #651 [1] for details.
233s         #
233s         # [1]
233s         release_this_conn = release_conn
233s
233s         http_tunnel_required = connection_requires_http_tunnel(
233s             self.proxy, self.proxy_config, destination_scheme
233s         )
233s
233s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s         # have to copy the headers dict so we can safely change it without those
233s         # changes being reflected in anyone else's copy.
233s         if not http_tunnel_required:
233s             headers = headers.copy()  # type: ignore[attr-defined]
233s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
233s
233s         # Must keep the exception bound to a separate variable or else Python 3
233s         # complains about UnboundLocalError.
233s         err = None
233s
233s         # Keep track of whether we cleanly exited the except block. This
233s         # ensures we do proper cleanup in finally.
233s         clean_exit = False
233s
233s         # Rewind body position, if needed. Record current position
233s         # for future rewinds in the event of a redirect/retry.
233s         body_pos = set_file_position(body, body_pos)
233s
233s         try:
233s             # Request a connection from the queue.
233s             timeout_obj = self._get_timeout(timeout)
233s             conn = self._get_conn(timeout=pool_timeout)
233s
233s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
233s
233s             # Is this a closed/new connection that requires CONNECT tunnelling?
233s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s                 try:
233s                     self._prepare_proxy(conn)
233s                 except (BaseSSLError, OSError, SocketTimeout) as e:
233s                     self._raise_timeout(
233s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
233s                     )
233s                     raise
233s
233s             # If we're going to release the connection in ``finally:``, then
233s             # the response doesn't need to know about the connection. Otherwise
233s             # it will also try to release it and we'll have a double-release
233s             # mess.
233s             response_conn = conn if not release_conn else None
233s
233s             # Make the request on the HTTPConnection object
233s >           response = self._make_request(
233s                 conn,
233s                 method,
233s                 url,
233s                 timeout=timeout_obj,
233s                 body=body,
233s                 headers=headers,
233s                 chunked=chunked,
233s                 retries=retries,
233s                 response_conn=response_conn,
233s                 preload_content=preload_content,
233s                 decode_content=decode_content,
233s                 **response_kw,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s     conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s     self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s     self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s     self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:486:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool =
233s _stacktrace =
233s
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s
233s During handling of the above exception, another exception occurred:
233s
233s cls =
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s
233s notebook/tests/launchnotebook.py:53:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s
233s >           raise ConnectionError(e, request=request)
233s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s cls =
233s
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
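[editor's aside] The isolation trick the `setup_class` comment above describes (`cls.env_patch = patch.dict('os.environ', ...)`) can be shown in a self-contained form. A minimal sketch; the variable name `EXAMPLE_CONFIG_DIR` is illustrative, not one the notebook test suite actually sets:

```python
import os
from unittest.mock import patch

# patch.dict temporarily overlays os.environ; stop() restores the original
# mapping, so tests can neither leak settings out nor pick them up from
# the real user environment.
env_patch = patch.dict(os.environ, {"EXAMPLE_CONFIG_DIR": "/tmp/example-config"})
env_patch.start()
assert os.environ["EXAMPLE_CONFIG_DIR"] == "/tmp/example-config"
env_patch.stop()
assert "EXAMPLE_CONFIG_DIR" not in os.environ
```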
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                     import asyncio
233s
233s                     asyncio.set_event_loop(asyncio.new_event_loop())
233s                     # Patch the current loop in order to match production
233s                     # behavior
233s                     import nest_asyncio
233s
233s                     nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s
233s notebook/tests/launchnotebook.py:198:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s cls =
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ______________ ERROR at setup of APITest.test_create_untitled_txt ______________
233s
233s self =
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object. Passing the optional
233s         *timeout* parameter will set the timeout on the socket instance
233s         before attempting to connect. If no *timeout* is supplied, the
233s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s         is used. If *source_address* is set it must be a tuple of (host, port)
233s         for the socket to bind as a source address before making the connection.
233s         An host of '' or port 0 tells the OS to use the default.
233s         """
233s
233s         host, port = address
233s         if host.startswith("["):
233s             host = host.strip("[]")
233s         err = None
233s
233s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s         # The original create_connection function always returns all records.
233s         family = allowed_gai_family()
233s
233s         try:
233s             host.encode("idna")
233s         except UnicodeError:
233s             raise LocationParseError(f"'{host}', label empty or too long") from None
233s
233s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s             af, socktype, proto, canonname, sa = res
233s             sock = None
233s             try:
233s                 sock = socket.socket(af, socktype, proto)
233s
233s                 # If provided, set socket level options before connecting.
233s                 _set_socket_options(sock, socket_options)
233s
233s                 if timeout is not _DEFAULT_TIMEOUT:
233s                     sock.settimeout(timeout)
233s                 if source_address:
233s                     sock.bind(source_address)
233s >               sock.connect(sa)
233s E               ConnectionRefusedError: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s
233s     def urlopen(  # type: ignore[override]
233s         self,
233s         method: str,
233s         url: str,
233s         body: _TYPE_BODY | None = None,
233s         headers: typing.Mapping[str, str] | None = None,
233s         retries: Retry | bool | int | None = None,
233s         redirect: bool = True,
233s         assert_same_host: bool = True,
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         pool_timeout: int | None = None,
233s         release_conn: bool | None = None,
233s         chunked: bool = False,
233s         body_pos: _TYPE_BODY_POSITION | None = None,
233s         preload_content: bool = True,
233s         decode_content: bool = True,
233s         **response_kw: typing.Any,
233s     ) -> BaseHTTPResponse:
233s         """
233s         Get a connection from the pool and perform an HTTP request. This is the
233s         lowest level call for making a request, so you'll need to specify all
233s         the raw details.
233s
233s         .. note::
233s
233s            More commonly, it's appropriate to use a convenience method
233s            such as :meth:`request`.
233s
233s         .. note::
233s
233s            `release_conn` will only behave as expected if
233s            `preload_content=False` because we want to make
233s            `preload_content=False` the default behaviour someday soon without
233s            breaking backwards compatibility.
233s
233s         :param method:
233s             HTTP request method (such as GET, POST, PUT, etc.)
233s
233s         :param url:
233s             The URL to perform the request on.
233s
233s         :param body:
233s             Data to send in the request body, either :class:`str`, :class:`bytes`,
233s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s
233s         :param headers:
233s             Dictionary of custom headers to send, such as User-Agent,
233s             If-None-Match, etc. If None, pool headers are used. If provided,
233s             these headers completely replace any pool-specific headers.
233s
233s         :param retries:
233s             Configure the number of retries to allow before raising a
233s             :class:`~urllib3.exceptions.MaxRetryError` exception.
233s
233s             Pass ``None`` to retry until you receive a response. Pass a
233s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s             over different types of retries.
233s             Pass an integer number to retry connection errors that many times,
233s             but no other types of errors. Pass zero to never retry.
233s
233s             If ``False``, then retries are disabled and any exception is raised
233s             immediately. Also, instead of raising a MaxRetryError on redirects,
233s             the redirect response will be returned.
233s
233s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s
233s         :param redirect:
233s             If True, automatically handle redirects (status codes 301, 302,
233s             303, 307, 308). Each redirect counts as a retry. Disabling retries
233s             will disable redirect, too.
233s
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
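[editor's aside] The ``timeout`` parameter documented above accepts a `urllib3.util.Timeout` object, which is what appears throughout this log as `Timeout(connect=None, read=None, total=None)`. A minimal sketch of constructing one with separate connect and read budgets:

```python
from urllib3.util import Timeout

# Separate connect/read budgets; with total=None each applies on its own.
t = Timeout(connect=3.05, read=27)
print(t.connect_timeout, t.read_timeout)  # 3.05 27
```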
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool = 
233s _stacktrace = 
233s 
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s 
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s 
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s 
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s 
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s 
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s 
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s 
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s 
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s 
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s 
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s 
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s 
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s 
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s 
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s 
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s 
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s 
233s >           raise ConnectionError(e, request=request)
233s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s 
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s 
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s 
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s 
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                     import asyncio
233s 
233s                     asyncio.set_event_loop(asyncio.new_event_loop())
233s                     # Patch the current loop in order to match production
233s                     # behavior
233s                     import nest_asyncio
233s 
233s                     nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s 
233s notebook/tests/launchnotebook.py:198: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s _______________ ERROR at setup of APITest.test_delete_hidden_dir _______________
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s 
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object. Passing the optional
233s         *timeout* parameter will set the timeout on the socket instance
233s         before attempting to connect. If no *timeout* is supplied, the
233s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s         is used. If *source_address* is set it must be a tuple of (host, port)
233s         for the socket to bind as a source address before making the connection.
233s         An host of '' or port 0 tells the OS to use the default.
233s         """
233s 
233s         host, port = address
233s         if host.startswith("["):
233s             host = host.strip("[]")
233s         err = None
233s 
233s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s         # The original create_connection function always returns all records.
233s         family = allowed_gai_family()
233s 
233s         try:
233s             host.encode("idna")
233s         except UnicodeError:
233s             raise LocationParseError(f"'{host}', label empty or too long") from None
233s 
233s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s             af, socktype, proto, canonname, sa = res
233s             sock = None
233s             try:
233s                 sock = socket.socket(af, socktype, proto)
233s 
233s                 # If provided, set socket level options before connecting.
233s                 _set_socket_options(sock, socket_options)
233s 
233s                 if timeout is not _DEFAULT_TIMEOUT:
233s                     sock.settimeout(timeout)
233s                 if source_address:
233s                     sock.bind(source_address)
233s >               sock.connect(sa)
233s E               ConnectionRefusedError: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s 
233s     def urlopen(  # type: ignore[override]
233s         self,
233s         method: str,
233s         url: str,
233s         body: _TYPE_BODY | None = None,
233s         headers: typing.Mapping[str, str] | None = None,
233s         retries: Retry | bool | int | None = None,
233s         redirect: bool = True,
233s         assert_same_host: bool = True,
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         pool_timeout: int | None = None,
233s         release_conn: bool | None = None,
233s         chunked: bool = False,
233s         body_pos: _TYPE_BODY_POSITION | None = None,
233s         preload_content: bool = True,
233s         decode_content: bool = True,
233s         **response_kw: typing.Any,
233s     ) -> BaseHTTPResponse:
233s         """
233s         Get a connection from the pool and perform an HTTP request. This is the
233s         lowest level call for making a request, so you'll need to specify all
233s         the raw details.
233s 
233s         .. note::
233s 
233s             More commonly, it's appropriate to use a convenience method
233s             such as :meth:`request`.
233s 
233s         .. note::
233s 
233s             `release_conn` will only behave as expected if
233s             `preload_content=False` because we want to make
233s             `preload_content=False` the default behaviour someday soon without
233s             breaking backwards compatibility.
233s 
233s         :param method:
233s             HTTP request method (such as GET, POST, PUT, etc.)
233s 
233s         :param url:
233s             The URL to perform the request on.
233s 
233s         :param body:
233s             Data to send in the request body, either :class:`str`, :class:`bytes`,
233s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s 
233s         :param headers:
233s             Dictionary of custom headers to send, such as User-Agent,
233s             If-None-Match, etc. If None, pool headers are used. If provided,
233s             these headers completely replace any pool-specific headers.
233s 
233s         :param retries:
233s             Configure the number of retries to allow before raising a
233s             :class:`~urllib3.exceptions.MaxRetryError` exception.
233s 
233s             Pass ``None`` to retry until you receive a response. Pass a
233s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s             over different types of retries.
233s             Pass an integer number to retry connection errors that many times,
233s             but no other types of errors. Pass zero to never retry.
233s 
233s             If ``False``, then retries are disabled and any exception is raised
233s             immediately. Also, instead of raising a MaxRetryError on redirects,
233s             the redirect response will be returned.
233s 
233s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s 
233s         :param redirect:
233s             If True, automatically handle redirects (status codes 301, 302,
233s             303, 307, 308). Each redirect counts as a retry. Disabling retries
233s             will disable redirect, too.
233s 
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s 
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s 
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s 
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s 
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s 
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s 
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s 
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
233s         """
233s         parsed_url = parse_url(url)
233s         destination_scheme = parsed_url.scheme
233s 
233s         if headers is None:
233s             headers = self.headers
233s 
233s         if not isinstance(retries, Retry):
233s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s 
233s         if release_conn is None:
233s             release_conn = preload_content
233s 
233s         # Check host
233s         if assert_same_host and not self.is_same_host(url):
233s             raise HostChangedError(self, url, retries)
233s 
233s         # Ensure that the URL we're connecting to is properly encoded
233s         if url.startswith("/"):
233s             url = to_str(_encode_target(url))
233s         else:
233s             url = to_str(parsed_url.url)
233s 
233s         conn = None
233s 
233s         # Track whether `conn` needs to be released before
233s         # returning/raising/recursing. Update this variable if necessary, and
233s         # leave `release_conn` constant throughout the function. That way, if
233s         # the function recurses, the original value of `release_conn` will be
233s         # passed down into the recursive call, and its value will be respected.
233s         #
233s         # See issue #651 [1] for details.
233s         #
233s         # [1] 
233s         release_this_conn = release_conn
233s 
233s         http_tunnel_required = connection_requires_http_tunnel(
233s             self.proxy, self.proxy_config, destination_scheme
233s         )
233s 
233s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s         # have to copy the headers dict so we can safely change it without those
233s         # changes being reflected in anyone else's copy.
233s         if not http_tunnel_required:
233s             headers = headers.copy()  # type: ignore[attr-defined]
233s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
233s 
233s         # Must keep the exception bound to a separate variable or else Python 3
233s         # complains about UnboundLocalError.
233s         err = None
233s 
233s         # Keep track of whether we cleanly exited the except block. This
233s         # ensures we do proper cleanup in finally.
233s         clean_exit = False
233s 
233s         # Rewind body position, if needed. Record current position
233s         # for future rewinds in the event of a redirect/retry.
233s         body_pos = set_file_position(body, body_pos)
233s 
233s         try:
233s             # Request a connection from the queue.
233s             timeout_obj = self._get_timeout(timeout)
233s             conn = self._get_conn(timeout=pool_timeout)
233s 
233s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
233s 
233s             # Is this a closed/new connection that requires CONNECT tunnelling?
233s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s                 try:
233s                     self._prepare_proxy(conn)
233s                 except (BaseSSLError, OSError, SocketTimeout) as e:
233s                     self._raise_timeout(
233s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
233s                     )
233s                     raise
233s 
233s             # If we're going to release the connection in ``finally:``, then
233s             # the response doesn't need to know about the connection. Otherwise
233s             # it will also try to release it and we'll have a double-release
233s             # mess.
233s             response_conn = conn if not release_conn else None
233s 
233s             # Make the request on the HTTPConnection object
233s >           response = self._make_request(
233s                 conn,
233s                 method,
233s                 url,
233s                 timeout=timeout_obj,
233s                 body=body,
233s                 headers=headers,
233s                 chunked=chunked,
233s                 retries=retries,
233s                 response_conn=response_conn,
233s                 preload_content=preload_content,
233s                 decode_content=decode_content,
233s                 **response_kw,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s     conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s     self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s     self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s     self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s 
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool = 
233s _stacktrace = 
233s 
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s 
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s 
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s 
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s 
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s 
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s 
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s 
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s 
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s 
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s 
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s 
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s 
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
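The setup_class comments above describe isolating the tests by patching environment variables and Jupyter path constants onto a throwaway directory tree. A minimal standalone sketch of that isolation pattern using `unittest.mock.patch.dict` (the variable names below are illustrative, not the suite's exact set):

```python
import os
import tempfile
from unittest.mock import patch

def isolated_env(tmp_root):
    """Environment overlay pointing home/config/data locations at a
    throwaway tree (illustrative variable names, not the suite's)."""
    return {
        "HOME": os.path.join(tmp_root, "home"),
        "JUPYTER_CONFIG_DIR": os.path.join(tmp_root, "config"),
        "JUPYTER_DATA_DIR": os.path.join(tmp_root, "data"),
    }

if __name__ == "__main__":
    before = os.environ.get("JUPYTER_CONFIG_DIR")
    with tempfile.TemporaryDirectory() as tmp:
        with patch.dict("os.environ", isolated_env(tmp)):
            # Inside the patch, lookups resolve to the temporary tree.
            assert os.environ["JUPYTER_CONFIG_DIR"].startswith(tmp)
    # patch.dict restores the original environment on exit.
    assert os.environ.get("JUPYTER_CONFIG_DIR") == before
```

`patch.dict` restores the original mapping even if the body raises, which is why the traceback's setup code can start/stop patches per test class without leaking state.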
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ______________ ERROR at setup of APITest.test_delete_hidden_file _______________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
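The wait_until_alive failure above comes from a bounded polling loop in launchnotebook.py. A self-contained sketch of the same pattern (timings, the probe, and the liveness check are assumptions, not the suite's real values):

```python
import time

# Illustrative timings; the suite's real MAX_WAITTIME/POLL_INTERVAL differ.
MAX_WAITTIME = 1.0
POLL_INTERVAL = 0.1

def wait_until_alive(probe, server_thread_alive):
    """Poll `probe` until it succeeds, mirroring the loop in the
    traceback: bail out with RuntimeError as soon as the server
    thread is known to be dead, rather than polling blindly."""
    last = None
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            probe()          # e.g. an HTTP GET against api/contents
            return True
        except Exception as e:
            last = e
            if not server_thread_alive():
                # This is the branch that fired in the log.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise TimeoutError("server did not come up in time") from last
```

In the log's case the probe raised ConnectionError on the first attempt and the server thread was already dead, so the RuntimeError was raised immediately with the ConnectionError chained as its cause.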
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
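The Retry.increment source shown in the traceback decrements a per-category budget on each failure and raises MaxRetryError once any counter goes negative; with `Retry(total=0, ...)` as in this log, the very first connection error exhausts the budget. A deliberately simplified model of that bookkeeping (a toy class for integer totals only, not urllib3's real Retry):

```python
class SimpleRetry:
    """Toy model of the decrement logic from urllib3's Retry.increment
    in the traceback above (integer or False totals only)."""

    def __init__(self, total):
        self.total = total

    def is_exhausted(self):
        return self.total is not None and self.total < 0

    def increment(self, error):
        if self.total is False and error:
            raise error  # retries disabled: re-raise the original error
        new = SimpleRetry(self.total - 1)
        if new.is_exhausted():
            # urllib3 raises MaxRetryError here; a plain exception stands in.
            raise RuntimeError(f"max retries exceeded: {error!r}") from error
        return new
```

With total=0 the first `increment` produces total=-1, which is exhausted, matching the log's immediate MaxRetryError on the initial connection-refused attempt.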
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _______________ ERROR at setup of APITest.test_file_checkpoints ________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ________________ ERROR at setup of APITest.test_get_404_hidden _________________
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
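The `retries` semantics quoted in the docstring above (an int bounds attempts; a `Retry` object gives fine-grained control) can be observed directly on the `Retry` object, which is immutable: `increment()` returns a new instance with the budget reduced. This is an editor's sketch against urllib3's public API, not part of the log; the builtin `ConnectionError` stands in for a real transport failure:

```python
from urllib3.util.retry import Retry

# Same normalisation urlopen() applies when `retries` is an int.
r = Retry.from_int(2)

# Simulate one failed attempt. The error here is Python's builtin
# ConnectionError, used only as a stand-in for a transport failure.
r2 = r.increment(method="GET", url="/a%40b/api/contents",
                 error=ConnectionError("refused"))

# Retry objects are immutable: the original budget is untouched and
# the returned copy has one fewer attempt left.
print(r.total, r2.total)  # 2 1
```

With `Retry(total=0)`, as in the failing tests above, the very first `increment()` exhausts the budget and `MaxRetryError` is raised instead.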
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _________________ ERROR at setup of APITest.test_get_bad_type __________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
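The refused connect at the bottom of every chain in this log is plain TCP behaviour and can be reproduced with the stdlib alone. A minimal sketch, assuming (as in this test run) that nothing is listening on the target port:

```python
import socket

try:
    # The same call urllib3's create_connection() wraps. With no
    # listener on the port, the kernel rejects the handshake with
    # ECONNREFUSED (errno 111 on Linux, as seen in the traceback).
    socket.create_connection(("127.0.0.1", 12341), timeout=1)
except ConnectionRefusedError as e:
    print("refused:", e.errno)
```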
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
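The `Retry.increment` source quoted above counts down the relevant budget and raises `MaxRetryError` once the new `Retry` object is exhausted. The test suite runs with `Retry(total=0, ...)` (visible in the repr above), so the very first connection error exhausts the budget. A minimal sketch of that behaviour, using urllib3's public API (the URL and error message are taken from this log; a plain `OSError` stands in for the refused connection):

```python
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

# total=0: one failure is already one too many.
retry = Retry(total=0)
try:
    retry.increment(
        method="GET",
        url="/a%40b/api/contents",
        error=OSError("[Errno 111] Connection refused"),
    )
except MaxRetryError as exc:
    # increment() returned no new Retry; it raised instead,
    # which is exactly the MaxRetryError seen further down.
    print(type(exc).__name__)  # MaxRetryError
```

This is why the log shows no repeated connection attempts: with a zero budget, `urlopen` fails over to `retries.increment(...)` exactly once and the error propagates immediately.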
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
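The `except MaxRetryError` block above is where requests translates urllib3's exception into its own hierarchy; since the `reason` here is a `NewConnectionError`, none of the `ConnectTimeout`/`RetryError`/`ProxyError`/`SSLError` branches match and the final `raise ConnectionError(e, request=request)` fires. A minimal reproduction, assuming requests is installed and that nothing grabs the probe port between close and connect:

```python
import socket
import requests

# Reserve a free local port, then close it so nothing is listening there.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    requests.get(f"http://127.0.0.1:{port}/", timeout=5)
except requests.exceptions.ConnectionError as exc:
    # The refused TCP connect surfaces as requests' ConnectionError,
    # wrapping the urllib3 MaxRetryError/NewConnectionError chain.
    print(type(exc).__name__)  # ConnectionError
```

This mirrors the failure mode in this log: the notebook server never bound port 12341, so every `requests.get` from the test harness ends in `ConnectionError`.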
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
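The `tmp()` helper in `setup_class` above guards `os.makedirs` with a manual `errno.EEXIST` check, an idiom that predates the `exist_ok` parameter. Assuming Python 3.2+, the same idempotent behaviour can be sketched as:

```python
import os
import tempfile

base = tempfile.mkdtemp()
path = os.path.join(base, "home")

# exist_ok=True makes the call idempotent, replacing the
# try/except OSError with errno.EEXIST guard used in tmp().
os.makedirs(path, exist_ok=True)
os.makedirs(path, exist_ok=True)  # second call is a no-op
print(os.path.isdir(path))  # True
```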
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ___________ ERROR at setup of APITest.test_get_binary_file_contents ____________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
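urllib3's `create_connection` shown above is a thin wrapper over the stdlib getaddrinfo/connect loop, so the underlying `[Errno 111]` failure can be reproduced with `socket.create_connection` alone. A minimal sketch (the errno is platform-dependent; `ECONNREFUSED` is 111 on Linux, matching this ppc64el log):

```python
import errno
import socket

# Reserve a free port, then close it so the subsequent connect is refused.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(("127.0.0.1", port), timeout=5)
except ConnectionRefusedError as exc:
    # This is the raw error urllib3 wraps into NewConnectionError.
    print(exc.errno == errno.ECONNREFUSED)  # True
```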
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ___________ ERROR at setup of APITest.test_get_contents_no_such_file ___________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s 
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s 
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s 
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s 
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s 
233s >       raise ConnectionError(e, request=request)
233s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s 
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s 
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s 
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s 
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                 import asyncio
233s 
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s 
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s 
233s notebook/tests/launchnotebook.py:198: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ______________ ERROR at setup of APITest.test_get_dir_no_content _______________
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s 
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object. Passing the optional
233s         *timeout* parameter will set the timeout on the socket instance
233s         before attempting to connect. If no *timeout* is supplied, the
233s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s         is used. If *source_address* is set it must be a tuple of (host, port)
233s         for the socket to bind as a source address before making the connection.
233s         An host of '' or port 0 tells the OS to use the default.
233s         """
233s 
233s         host, port = address
233s         if host.startswith("["):
233s             host = host.strip("[]")
233s         err = None
233s 
233s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s         # The original create_connection function always returns all records.
233s         family = allowed_gai_family()
233s 
233s         try:
233s             host.encode("idna")
233s         except UnicodeError:
233s             raise LocationParseError(f"'{host}', label empty or too long") from None
233s 
233s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s             af, socktype, proto, canonname, sa = res
233s             sock = None
233s             try:
233s                 sock = socket.socket(af, socktype, proto)
233s 
233s                 # If provided, set socket level options before connecting.
233s                 _set_socket_options(sock, socket_options)
233s 
233s                 if timeout is not _DEFAULT_TIMEOUT:
233s                     sock.settimeout(timeout)
233s                 if source_address:
233s                     sock.bind(source_address)
233s >               sock.connect(sa)
233s E               ConnectionRefusedError: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s 
233s     def urlopen(  # type: ignore[override]
233s         self,
233s         method: str,
233s         url: str,
233s         body: _TYPE_BODY | None = None,
233s         headers: typing.Mapping[str, str] | None = None,
233s         retries: Retry | bool | int | None = None,
233s         redirect: bool = True,
233s         assert_same_host: bool = True,
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         pool_timeout: int | None = None,
233s         release_conn: bool | None = None,
233s         chunked: bool = False,
233s         body_pos: _TYPE_BODY_POSITION | None = None,
233s         preload_content: bool = True,
233s         decode_content: bool = True,
233s         **response_kw: typing.Any,
233s     ) -> BaseHTTPResponse:
233s         """
233s         Get a connection from the pool and perform an HTTP request. This is the
233s         lowest level call for making a request, so you'll need to specify all
233s         the raw details.
233s 
233s         .. note::
233s 
233s             More commonly, it's appropriate to use a convenience method
233s             such as :meth:`request`.
233s 
233s         .. note::
233s 
233s             `release_conn` will only behave as expected if
233s             `preload_content=False` because we want to make
233s             `preload_content=False` the default behaviour someday soon without
233s             breaking backwards compatibility.
233s 
233s         :param method:
233s             HTTP request method (such as GET, POST, PUT, etc.)
233s 
233s         :param url:
233s             The URL to perform the request on.
233s 
233s         :param body:
233s             Data to send in the request body, either :class:`str`, :class:`bytes`,
233s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s 
233s         :param headers:
233s             Dictionary of custom headers to send, such as User-Agent,
233s             If-None-Match, etc. If None, pool headers are used. If provided,
233s             these headers completely replace any pool-specific headers.
233s 
233s         :param retries:
233s             Configure the number of retries to allow before raising a
233s             :class:`~urllib3.exceptions.MaxRetryError` exception.
233s 
233s             Pass ``None`` to retry until you receive a response. Pass a
233s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s             over different types of retries.
233s             Pass an integer number to retry connection errors that many times,
233s             but no other types of errors. Pass zero to never retry.
233s 
233s             If ``False``, then retries are disabled and any exception is raised
233s             immediately. Also, instead of raising a MaxRetryError on redirects,
233s             the redirect response will be returned.
233s 
233s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s 
233s         :param redirect:
233s             If True, automatically handle redirects (status codes 301, 302,
233s             303, 307, 308). Each redirect counts as a retry. Disabling retries
233s             will disable redirect, too.
233s 
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s 
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s 
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s 
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s 
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s 
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s 
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s 
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
233s         """
233s         parsed_url = parse_url(url)
233s         destination_scheme = parsed_url.scheme
233s 
233s         if headers is None:
233s             headers = self.headers
233s 
233s         if not isinstance(retries, Retry):
233s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s 
233s         if release_conn is None:
233s             release_conn = preload_content
233s 
233s         # Check host
233s         if assert_same_host and not self.is_same_host(url):
233s             raise HostChangedError(self, url, retries)
233s 
233s         # Ensure that the URL we're connecting to is properly encoded
233s         if url.startswith("/"):
233s             url = to_str(_encode_target(url))
233s         else:
233s             url = to_str(parsed_url.url)
233s 
233s         conn = None
233s 
233s         # Track whether `conn` needs to be released before
233s         # returning/raising/recursing. Update this variable if necessary, and
233s         # leave `release_conn` constant throughout the function. That way, if
233s         # the function recurses, the original value of `release_conn` will be
233s         # passed down into the recursive call, and its value will be respected.
233s         #
233s         # See issue #651 [1] for details.
233s         #
233s         # [1] 
233s         release_this_conn = release_conn
233s 
233s         http_tunnel_required = connection_requires_http_tunnel(
233s             self.proxy, self.proxy_config, destination_scheme
233s         )
233s 
233s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s         # have to copy the headers dict so we can safely change it without those
233s         # changes being reflected in anyone else's copy.
233s         if not http_tunnel_required:
233s             headers = headers.copy()  # type: ignore[attr-defined]
233s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
233s 
233s         # Must keep the exception bound to a separate variable or else Python 3
233s         # complains about UnboundLocalError.
233s         err = None
233s 
233s         # Keep track of whether we cleanly exited the except block. This
233s         # ensures we do proper cleanup in finally.
233s         clean_exit = False
233s 
233s         # Rewind body position, if needed. Record current position
233s         # for future rewinds in the event of a redirect/retry.
233s         body_pos = set_file_position(body, body_pos)
233s 
233s         try:
233s             # Request a connection from the queue.
233s             timeout_obj = self._get_timeout(timeout)
233s             conn = self._get_conn(timeout=pool_timeout)
233s 
233s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
233s 
233s             # Is this a closed/new connection that requires CONNECT tunnelling?
233s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s                 try:
233s                     self._prepare_proxy(conn)
233s                 except (BaseSSLError, OSError, SocketTimeout) as e:
233s                     self._raise_timeout(
233s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
233s                     )
233s                 raise
233s 
233s             # If we're going to release the connection in ``finally:``, then
233s             # the response doesn't need to know about the connection. Otherwise
233s             # it will also try to release it and we'll have a double-release
233s             # mess.
233s             response_conn = conn if not release_conn else None
233s 
233s             # Make the request on the HTTPConnection object
233s >           response = self._make_request(
233s                 conn,
233s                 method,
233s                 url,
233s                 timeout=timeout_obj,
233s                 body=body,
233s                 headers=headers,
233s                 chunked=chunked,
233s                 retries=retries,
233s                 response_conn=response_conn,
233s                 preload_content=preload_content,
233s                 decode_content=decode_content,
233s                 **response_kw,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s     conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s     self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s     self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s     self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s 
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool = 
233s _stacktrace = 
233s 
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s 
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s 
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s 
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s 
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s 
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s 
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s 
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s 
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s 
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s 
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s 
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s 
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s 
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s 
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s 
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s 
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s 
233s >       raise ConnectionError(e, request=request)
233s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s 
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s 
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s 
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s 
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                 import asyncio
233s 
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s 
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s 
233s notebook/tests/launchnotebook.py:198: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ________________ ERROR at setup of APITest.test_get_nb_contents ________________
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s 
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object. Passing the optional
233s         *timeout* parameter will set the timeout on the socket instance
233s         before attempting to connect. If no *timeout* is supplied, the
233s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s         is used. If *source_address* is set it must be a tuple of (host, port)
233s         for the socket to bind as a source address before making the connection.
233s         An host of '' or port 0 tells the OS to use the default.
233s         """
233s 
233s         host, port = address
233s         if host.startswith("["):
233s             host = host.strip("[]")
233s         err = None
233s 
233s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s         # The original create_connection function always returns all records.
233s         family = allowed_gai_family()
233s 
233s         try:
233s             host.encode("idna")
233s         except UnicodeError:
233s             raise LocationParseError(f"'{host}', label empty or too long") from None
233s 
233s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s             af, socktype, proto, canonname, sa = res
233s             sock = None
233s             try:
233s                 sock = socket.socket(af, socktype, proto)
233s 
233s                 # If provided, set socket level options before connecting.
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ________________ ERROR at setup of APITest.test_get_nb_invalid _________________
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
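[editor's note] The `Retry.increment()` frame shown here decrements per-category counters and raises `MaxRetryError` once they are exhausted. A deliberately simplified model of that bookkeeping (not the real `Retry` class; the exception name is made up for the sketch):

```python
class MaxRetriesExceeded(Exception):
    pass

def increment(total, error):
    # total=None would mean "retry indefinitely"; an int counts down.
    if total is None:
        return None
    total -= 1
    if total < 0:
        # Retry(total=0), as used by the test client, is exhausted by
        # the very first connection error: the countdown goes 0 -> -1.
        raise MaxRetriesExceeded(str(error)) from error
    return total

print(increment(2, ConnectionRefusedError()))  # one failure absorbed, one retry left
```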
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
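[editor's note] `setup_class()` above launches the notebook server on a daemon thread and then blocks in `wait_until_alive()`, polling an endpoint until it answers or the budget runs out. The general pattern, reduced to a stdlib sketch (function name, intervals, and the throwaway listener are illustrative, not the notebook suite's own code):

```python
import socket
import threading
import time

def wait_until_alive(host, port, max_wait=5.0, poll_interval=0.1):
    # Poll until something accepts a TCP connection, or give up,
    # mirroring the MAX_WAITTIME / POLL_INTERVAL loop in the log.
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=poll_interval):
                return True
        except OSError:
            time.sleep(poll_interval)  # server not up yet; retry
    return False  # this is the branch behind "failed to start"

# A throwaway listener stands in for the server under test.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=srv.accept, daemon=True).start()
print(wait_until_alive("127.0.0.1", srv.getsockname()[1]))
```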
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _______________ ERROR at setup of APITest.test_get_nb_no_content _______________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s
233s During handling of the above exception, another exception occurred:
233s
233s cls = 
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s
233s notebook/tests/launchnotebook.py:53:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
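A side note on the `Retry(total=0, connect=None, read=False, ...)` object visible in this traceback: urllib3's `increment()` builds a new `Retry` with the relevant counter decremented and raises `MaxRetryError` once the new object is exhausted, so a zero budget fails on the very first connection error. A stdlib-only sketch of that accounting (`MiniRetry` is a hypothetical stand-in, not the real class):

```python
from dataclasses import dataclass, replace

# MiniRetry is a hypothetical stand-in for urllib3's Retry accounting,
# not the real class. increment() returns a new object with total reduced
# by one; a negative total means the retry budget is exhausted, which is
# the point where urllib3 raises MaxRetryError.
@dataclass(frozen=True)
class MiniRetry:
    total: int

    def increment(self) -> "MiniRetry":
        return replace(self, total=self.total - 1)

    def is_exhausted(self) -> bool:
        return self.total < 0

# A zero budget, as in the log above, exhausts on the first error:
r = MiniRetry(total=0).increment()
print(r.total, r.is_exhausted())  # -1 True
```

This mirrors why the log shows `MaxRetryError` immediately: the adapter's default budget leaves no room for a second attempt after the refused connect.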
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ____________ ERROR at setup of APITest.test_get_text_file_contents _____________
233s
233s self = 
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object.
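Beneath all the wrapping in these tracebacks, the primitive failure is the same every time: `create_connection` cannot reach `localhost:12341` because the notebook server never started listening, so the OS refuses the TCP connect with errno 111. A minimal stdlib reproduction (the port number is taken from the log; any local port with no listener behaves the same):

```python
import errno
import socket

# Attempt a plain TCP connection, the same primitive urllib3's
# create_connection ultimately performs. With nothing listening on the
# port, the kernel refuses the connect outright (ECONNREFUSED) rather
# than timing out.
def probe(host: str, port: int) -> str:
    try:
        with socket.create_connection((host, port), timeout=5):
            return "connected"
    except OSError as e:
        if e.errno == errno.ECONNREFUSED:
            return f"refused (errno {e.errno})"
        return f"error (errno {e.errno})"

print(probe("127.0.0.1", 12341))  # typically "refused (errno 111)" on Linux
```

This is the raw event that urllib3 wraps in `NewConnectionError`, `Retry.increment` converts to `MaxRetryError`, and requests finally surfaces as `ConnectionError` in the chain above.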
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:486:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool =
233s _stacktrace =
233s
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s
233s During handling of the above exception, another exception occurred:
233s
233s cls =
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s
233s notebook/tests/launchnotebook.py:53:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s
233s >           raise ConnectionError(e, request=request)
233s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s cls =
233s
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                 import asyncio
233s
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s
233s notebook/tests/launchnotebook.py:198:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s cls =
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ___________________ ERROR at setup of APITest.test_list_dirs ___________________
233s
233s self =
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s
233s def create_connection(
233s     address: tuple[str, int],
233s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s     source_address: tuple[str, int] | None = None,
233s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s ) -> socket.socket:
233s     """Connect to *address* and return the socket object.
233s
233s     Convenience function. Connect to *address* (a 2-tuple ``(host,
233s     port)``) and return the socket object. Passing the optional
233s     *timeout* parameter will set the timeout on the socket instance
233s     before attempting to connect. If no *timeout* is supplied, the
233s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s     is used. If *source_address* is set it must be a tuple of (host, port)
233s     for the socket to bind as a source address before making the connection.
233s     An host of '' or port 0 tells the OS to use the default.
233s     """
233s
233s     host, port = address
233s     if host.startswith("["):
233s         host = host.strip("[]")
233s     err = None
233s
233s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s     # The original create_connection function always returns all records.
233s     family = allowed_gai_family()
233s
233s     try:
233s         host.encode("idna")
233s     except UnicodeError:
233s         raise LocationParseError(f"'{host}', label empty or too long") from None
233s
233s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s         af, socktype, proto, canonname, sa = res
233s         sock = None
233s         try:
233s             sock = socket.socket(af, socktype, proto)
233s
233s             # If provided, set socket level options before connecting.
233s
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
233s         """
233s         parsed_url = parse_url(url)
233s         destination_scheme = parsed_url.scheme
233s
233s         if headers is None:
233s             headers = self.headers
233s
233s         if not isinstance(retries, Retry):
233s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s
233s         if release_conn is None:
233s             release_conn = preload_content
233s
233s         # Check host
233s         if assert_same_host and not self.is_same_host(url):
233s             raise HostChangedError(self, url, retries)
233s
233s         # Ensure that the URL we're connecting to is properly encoded
233s         if url.startswith("/"):
233s             url = to_str(_encode_target(url))
233s         else:
233s             url = to_str(parsed_url.url)
233s
233s         conn = None
233s
233s         # Track whether `conn` needs to be released before
233s         # returning/raising/recursing. Update this variable if necessary, and
233s         # leave `release_conn` constant throughout the function. That way, if
233s         # the function recurses, the original value of `release_conn` will be
233s         # passed down into the recursive call, and its value will be respected.
233s         #
233s         # See issue #651 [1] for details.
233s         #
233s         # [1]
233s         release_this_conn = release_conn
233s
233s         http_tunnel_required = connection_requires_http_tunnel(
233s             self.proxy, self.proxy_config, destination_scheme
233s         )
233s
233s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s         # have to copy the headers dict so we can safely change it without those
233s         # changes being reflected in anyone else's copy.
233s         if not http_tunnel_required:
233s             headers = headers.copy() # type: ignore[attr-defined]
233s             headers.update(self.proxy_headers) # type: ignore[union-attr]
233s
233s         # Must keep the exception bound to a separate variable or else Python 3
233s         # complains about UnboundLocalError.
233s         err = None
233s
233s         # Keep track of whether we cleanly exited the except block. This
233s         # ensures we do proper cleanup in finally.
233s         clean_exit = False
233s
233s         # Rewind body position, if needed. Record current position
233s         # for future rewinds in the event of a redirect/retry.
233s         body_pos = set_file_position(body, body_pos)
233s
233s         try:
233s             # Request a connection from the queue.
233s             timeout_obj = self._get_timeout(timeout)
233s             conn = self._get_conn(timeout=pool_timeout)
233s
233s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
233s
233s             # Is this a closed/new connection that requires CONNECT tunnelling?
233s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s                 try:
233s                     self._prepare_proxy(conn)
233s                 except (BaseSSLError, OSError, SocketTimeout) as e:
233s                     self._raise_timeout(
233s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
233s                     )
233s                     raise
233s
233s             # If we're going to release the connection in ``finally:``, then
233s             # the response doesn't need to know about the connection. Otherwise
233s             # it will also try to release it and we'll have a double-release
233s             # mess.
233s             response_conn = conn if not release_conn else None
233s
233s             # Make the request on the HTTPConnection object
233s >           response = self._make_request(
233s                 conn,
233s                 method,
233s                 url,
233s                 timeout=timeout_obj,
233s                 body=body,
233s                 headers=headers,
233s                 chunked=chunked,
233s                 retries=retries,
233s                 response_conn=response_conn,
233s                 preload_content=preload_content,
233s                 decode_content=decode_content,
233s                 **response_kw,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s     conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s     self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s     self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s     self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:486:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool =
233s _stacktrace =
233s
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s
233s During handling of the above exception, another exception occurred:
233s
233s cls =
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s
233s notebook/tests/launchnotebook.py:53:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s
233s >           raise ConnectionError(e, request=request)
233s E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s cls = 
233s
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                     import asyncio
233s
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s
233s notebook/tests/launchnotebook.py:198:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s cls = 
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E   RuntimeError: The notebook server failed to start
233s
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s _____________ ERROR at setup of APITest.test_list_nonexistant_dir ______________
233s
233s self = 
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s
233s def create_connection(
233s     address: tuple[str, int],
233s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s     source_address: tuple[str, int] | None = None,
233s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s ) -> socket.socket:
233s     """Connect to *address* and return the socket object.
233s
233s     Convenience function. Connect to *address* (a 2-tuple ``(host,
233s     port)``) and return the socket object. Passing the optional
233s     *timeout* parameter will set the timeout on the socket instance
233s     before attempting to connect. If no *timeout* is supplied, the
233s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s     is used. If *source_address* is set it must be a tuple of (host, port)
233s     for the socket to bind as a source address before making the connection.
233s     An host of '' or port 0 tells the OS to use the default.
233s     """
233s
233s     host, port = address
233s     if host.startswith("["):
233s         host = host.strip("[]")
233s     err = None
233s
233s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s     # The original create_connection function always returns all records.
233s     family = allowed_gai_family()
233s
233s     try:
233s         host.encode("idna")
233s     except UnicodeError:
233s         raise LocationParseError(f"'{host}', label empty or too long") from None
233s
233s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s         af, socktype, proto, canonname, sa = res
233s         sock = None
233s         try:
233s             sock = socket.socket(af, socktype, proto)
233s
233s             # If provided, set socket level options before connecting.
233s _set_socket_options(sock, socket_options)

233s             _set_socket_options(sock, socket_options)
233s
233s             if timeout is not _DEFAULT_TIMEOUT:
233s                 sock.settimeout(timeout)
233s             if source_address:
233s                 sock.bind(source_address)
233s >           sock.connect(sa)
233s E   ConnectionRefusedError: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self = 
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s
233s     def urlopen(  # type: ignore[override]
233s         self,
233s         method: str,
233s         url: str,
233s         body: _TYPE_BODY | None = None,
233s         headers: typing.Mapping[str, str] | None = None,
233s         retries: Retry | bool | int | None = None,
233s         redirect: bool = True,
233s         assert_same_host: bool = True,
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         pool_timeout: int | None = None,
233s         release_conn: bool | None = None,
233s         chunked: bool = False,
233s         body_pos: _TYPE_BODY_POSITION | None = None,
233s         preload_content: bool = True,
233s         decode_content: bool = True,
233s         **response_kw: typing.Any,
233s     ) -> BaseHTTPResponse:
233s         """
233s         Get a connection from the pool and perform an HTTP request. This is the
233s         lowest level call for making a request, so you'll need to specify all
233s         the raw details.
233s
233s         .. note::
233s
233s             More commonly, it's appropriate to use a convenience method
233s             such as :meth:`request`.
233s
233s         .. note::
233s
233s             `release_conn` will only behave as expected if
233s             `preload_content=False` because we want to make
233s             `preload_content=False` the default behaviour someday soon without
233s             breaking backwards compatibility.
233s
233s         :param method:
233s             HTTP request method (such as GET, POST, PUT, etc.)
233s
233s         :param url:
233s             The URL to perform the request on.
233s
233s         :param body:
233s             Data to send in the request body, either :class:`str`, :class:`bytes`,
233s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s
233s         :param headers:
233s             Dictionary of custom headers to send, such as User-Agent,
233s             If-None-Match, etc. If None, pool headers are used. If provided,
233s             these headers completely replace any pool-specific headers.
233s
233s         :param retries:
233s             Configure the number of retries to allow before raising a
233s             :class:`~urllib3.exceptions.MaxRetryError` exception.
233s
233s             Pass ``None`` to retry until you receive a response. Pass a
233s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s             over different types of retries.
233s             Pass an integer number to retry connection errors that many times,
233s             but no other types of errors. Pass zero to never retry.
233s
233s             If ``False``, then retries are disabled and any exception is raised
233s             immediately. Also, instead of raising a MaxRetryError on redirects,
233s             the redirect response will be returned.
233s
233s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s
233s         :param redirect:
233s             If True, automatically handle redirects (status codes 301, 302,
233s             303, 307, 308). Each redirect counts as a retry. Disabling retries
233s             will disable redirect, too.
233s
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:486:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool = 
233s _stacktrace = 
233s
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s
233s During handling of the above exception, another exception occurred:
233s
233s cls = 
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s
233s notebook/tests/launchnotebook.py:53:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s
233s >           raise ConnectionError(e, request=request)
233s E   requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s cls = 
233s
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                     import asyncio
233s
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s
233s notebook/tests/launchnotebook.py:198:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s cls = 
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E   RuntimeError: The notebook server failed to start
233s
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ________________ ERROR at setup of APITest.test_list_notebooks _________________
233s
233s self = 
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s
233s def create_connection(
233s     address: tuple[str, int],
233s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s     source_address: tuple[str, int] | None = None,
233s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s ) -> socket.socket:
233s     """Connect to *address* and return the socket object.
233s
233s     Convenience function. Connect to *address* (a 2-tuple ``(host,
233s     port)``) and return the socket object. Passing the optional
233s     *timeout* parameter will set the timeout on the socket instance
233s     before attempting to connect. If no *timeout* is supplied, the
233s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s     is used. If *source_address* is set it must be a tuple of (host, port)
233s     for the socket to bind as a source address before making the connection.
233s     An host of '' or port 0 tells the OS to use the default.
233s     """
233s
233s     host, port = address
233s     if host.startswith("["):
233s         host = host.strip("[]")
233s     err = None
233s
233s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s     # The original create_connection function always returns all records.
233s     family = allowed_gai_family()
233s
233s     try:
233s         host.encode("idna")
233s     except UnicodeError:
233s         raise LocationParseError(f"'{host}', label empty or too long") from None
233s
233s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s         af, socktype, proto, canonname, sa = res
233s         sock = None
233s         try:
233s             sock = socket.socket(af, socktype, proto)
233s
233s             # If provided, set socket level options before connecting.
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _____________________ ERROR at setup of APITest.test_mkdir _____________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _______________ ERROR at setup of APITest.test_mkdir_hidden_400 ________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ________________ ERROR at setup of APITest.test_mkdir_untitled _________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
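The `Retry.increment` listing in this traceback is what converts the refused connection into `MaxRetryError`: the adapter passes `Retry(total=0, connect=None, read=False, ...)` (visible in the `self = Retry(...)` line above), so the very first connection error decrements `total` below zero and the freshly built `Retry` is already exhausted. A minimal standalone sketch of that bookkeeping, with no network traffic (the URL string is copied from the log for flavor; the `NewConnectionError` arguments are illustrative):

```python
# Sketch only: reproduce the Retry(total=0) exhaustion from the traceback.
# NewConnectionError subclasses ConnectTimeoutError, so
# Retry._is_connection_error() classifies it as a connect failure.
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
exhausted = False
try:
    # increment() builds a new Retry with total=-1; is_exhausted() is then
    # true, so it raises MaxRetryError instead of returning the new object.
    retry.increment(
        method="GET",
        url="/a%40b/api/contents",
        error=NewConnectionError(None, "Failed to establish a new connection"),
    )
except MaxRetryError:
    exhausted = True
```

This matches the log: `retries=self.max_retries` on the `conn.urlopen(...)` call comes from `HTTPAdapter`, so the test client never retries a refused connection.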
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ____________________ ERROR at setup of APITest.test_rename _____________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
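Stripped of the requests/urllib3 frames, the failing fixture logic in `notebook/tests/launchnotebook.py` is a poll loop: keep hitting `api/contents` until it answers, and convert the pending connection error into `RuntimeError` as soon as the server thread is found dead. A self-contained sketch of that pattern (constant names mirror the log; the callables are stand-ins, not the notebook test code):

```python
# Hedged sketch of the wait_until_alive pattern: poll a health check,
# but fail fast with the real cause once the server thread has exited.
import time

MAX_WAITTIME = 30      # seconds to keep polling (illustrative values)
POLL_INTERVAL = 0.1    # seconds between attempts

def wait_until_alive(fetch, thread_is_alive):
    last_error = None
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            return fetch()
        except Exception as e:
            last_error = e
            if not thread_is_alive():
                # Chain the connection error so the log shows the root cause,
                # as in the "RuntimeError ... from e" frame above.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("Timed out waiting for the server") from last_error
```

The `from e` chaining is why the log prints three stacked tracebacks: the socket error, the requests `ConnectionError`, and finally the fixture's `RuntimeError`.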
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
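The root of the whole chain is a plain `ECONNREFUSED`: nothing is listening on `localhost:12341` because the NotebookApp thread died before binding. A hedged sketch of how connecting to a just-freed local port yields exactly `[Errno 111] Connection refused` on Linux (the helper names are illustrative, and there is a small race if another process grabs the port in between):

```python
# Sketch: provoke the same ConnectionRefusedError that
# urllib3.util.connection.create_connection surfaces in the traceback.
import errno
import socket

def refused_port() -> int:
    # Bind an ephemeral port, then close it so nothing is listening there.
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()
    return port

def try_connect(port: int) -> int:
    """Return 0 on success, else the OS errno from the failed connect."""
    try:
        s = socket.create_connection(("127.0.0.1", port), timeout=2)
    except OSError as e:
        return e.errno
    s.close()
    return 0
```

urllib3 wraps this `OSError` in `NewConnectionError`, requests wraps that in `ConnectionError`, and the test fixture finally reports `RuntimeError` — three layers over one refused TCP connect.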
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _______________ ERROR at setup of APITest.test_rename_400_hidden _______________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position
233s         # for future rewinds in the event of a redirect/retry.
233s         body_pos = set_file_position(body, body_pos)
233s 
233s         try:
233s             # Request a connection from the queue.
233s             timeout_obj = self._get_timeout(timeout)
233s             conn = self._get_conn(timeout=pool_timeout)
233s 
233s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
233s 
233s             # Is this a closed/new connection that requires CONNECT tunnelling?
233s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s                 try:
233s                     self._prepare_proxy(conn)
233s                 except (BaseSSLError, OSError, SocketTimeout) as e:
233s                     self._raise_timeout(
233s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
233s                     )
233s                     raise
233s 
233s             # If we're going to release the connection in ``finally:``, then
233s             # the response doesn't need to know about the connection. Otherwise
233s             # it will also try to release it and we'll have a double-release
233s             # mess.
233s             response_conn = conn if not release_conn else None
233s 
233s             # Make the request on the HTTPConnection object
233s >           response = self._make_request(
233s                 conn,
233s                 method,
233s                 url,
233s                 timeout=timeout_obj,
233s                 body=body,
233s                 headers=headers,
233s                 chunked=chunked,
233s                 retries=retries,
233s                 response_conn=response_conn,
233s                 preload_content=preload_content,
233s                 decode_content=decode_content,
233s                 **response_kw,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s     conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s     self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s     self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s     self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s 
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s                 raise reraise(type(error), error, _stacktrace)
233s 
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s 
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s 
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s 
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s 
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s 
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s 
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s             status = response.status
233s 
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s 
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s 
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ________________ ERROR at setup of APITest.test_rename_existing ________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
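Stripped to its essentials, the failure at the bottom of this chain is ordinary: `sock.connect()` targets a port with no listener, and the kernel answers with `ECONNREFUSED` (errno 111), exactly what the traceback shows for `localhost:12341` where the notebook server never came up. A minimal, self-contained sketch of that failure mode (the port here is chosen dynamically so the refusal is reproducible; the test run above used 12341):

```python
import errno
import socket

def probe(host: str, port: int):
    """Return the errno from a failed connect, or None on success."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return None
    except OSError as e:
        return e.errno

# Find a port that is currently closed: bind an ephemeral port to learn
# its number, then close the socket so nothing is listening there.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
port = tmp.getsockname()[1]
tmp.close()

# On Linux, connecting to a closed local port is refused immediately.
print(probe("127.0.0.1", port) == errno.ECONNREFUSED)
```

This is the same code path `urllib3.util.connection.create_connection` takes before wrapping the `OSError` in `NewConnectionError`.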
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
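The `Retry(total=0, connect=None, read=False, ...)` object visible in the locals explains why there is no second attempt: `increment()` spends the only budgeted attempt on the first connection error, `is_exhausted()` trips, and `MaxRetryError` is raised instead of retrying. A simplified sketch of that accounting (not the real urllib3 class, just the budget arithmetic):

```python
class MaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""

def increment(total, error=None):
    """Mimic the budget arithmetic of Retry.increment() shown above."""
    if total is False and error:
        raise error                       # retries disabled: re-raise as-is
    if total is not None:
        total -= 1                        # spend one attempt from the budget
    if total is not None and total < 0:   # Retry.is_exhausted()
        raise MaxRetryError("Max retries exceeded") from error
    return total

assert increment(3) == 2                  # budget remains: caller may retry
try:
    increment(0, error=ConnectionRefusedError(111, "Connection refused"))
except MaxRetryError:
    print("exhausted on the first error, as in the log")
```

With `total=0` (the value requests installs by default via `max_retries`), one refused connection is enough to surface as `MaxRetryError`, which requests then re-wraps as `requests.exceptions.ConnectionError`.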
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _____________________ ERROR at setup of APITest.test_save ______________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
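The `retries` contract quoted in the docstring above (``None`` retries forever, an int allows that many connection-error retries, ``False`` disables retries and re-raises) can be modeled in a few lines. This is illustrative bookkeeping only, not urllib3's `Retry` implementation:

```python
# Toy model of the `retries` counter semantics described in the docstring:
# None = retry forever, int = that many retries, False = disabled.
def increment(total):
    """Decrement a retry counter, raising once it is exhausted."""
    if total is False:
        # Retries disabled: the original error is re-raised immediately.
        raise ConnectionError("retries disabled")
    if total is None:
        # Unlimited retries: keep going until a response arrives.
        return None
    if total - 1 < 0:
        # Exhausted: urllib3 would raise MaxRetryError here, as in this log,
        # where Retry(total=0) fails on the first connection error.
        raise RuntimeError("max retries exceeded")
    return total - 1

print(increment(3))  # 2
```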
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
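The `setup_class` comment above describes isolating the test from the local machine by patching environment variables and Jupyter path constants. A simplified stdlib-only sketch of that isolation pattern (the real harness also patches `jupyter_core.paths` attributes):

```python
import os
import tempfile
from unittest.mock import patch

def isolated_env() -> dict:
    """Run a check inside a patched environment; return what was visible inside."""
    with tempfile.TemporaryDirectory() as tmp:
        env = {
            "JUPYTER_CONFIG_DIR": os.path.join(tmp, "config"),
            "JUPYTER_DATA_DIR": os.path.join(tmp, "data"),
        }
        # patch.dict overlays os.environ for the duration of the block only.
        with patch.dict(os.environ, env):
            seen = {k: os.environ[k] for k in env}
        # On exit the original environment is restored, even after an error.
        assert os.environ.get("JUPYTER_CONFIG_DIR") != env["JUPYTER_CONFIG_DIR"]
        return seen

print(sorted(isolated_env()))
```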
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ____________________ ERROR at setup of APITest.test_upload _____________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
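The "rewind body position" step above records the body's current offset before the first send so a redirect or retry can seek back and resend the same bytes. A minimal sketch of that idea, simplified to seekable bodies only (urllib3's `set_file_position` also handles non-seekable cases):

```python
import io

def record_position(body):
    """Record the current offset of a seekable request body (sketch of the
    role set_file_position plays above, reduced to the seekable case)."""
    if body is not None and hasattr(body, "tell"):
        return body.tell()
    return None

def rewind(body, pos):
    """Seek the body back to the recorded offset before retrying."""
    if pos is not None:
        body.seek(pos)

body = io.BytesIO(b"payload")
pos = record_position(body)   # offset before the first attempt
body.read()                   # first attempt consumes the body
rewind(body, pos)             # a retry can now re-read the same bytes
```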
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
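The adapter code above normalizes the `timeout` argument (a `(connect, read)` tuple or a single scalar applied to both) and decides on chunked transfer when there is a body but no explicit `Content-Length`. Both rules are small enough to sketch with plain Python, mirroring the logic shown:

```python
def normalize_timeout(timeout):
    """Mirror of the tuple-unpacking shown above: a 2-tuple gives separate
    connect/read timeouts; a scalar applies to both. Illustrative only."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts."
            )
        return connect, read
    return timeout, timeout

def should_chunk(body, headers):
    """Chunked transfer encoding is chosen only when a body exists and no
    Content-Length header was supplied, as in the adapter code above."""
    return not (body is None or "Content-Length" in headers)
```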
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
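The `Retry.increment` accounting above explains the `MaxRetryError` seen in this log: the test harness uses `Retry(total=0)`, so the very first connection failure decrements `total` to -1 and exhausts the retry budget. A toy counterpart of that bookkeeping (a sketch, not urllib3's `Retry` class):

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class MiniRetry:
    """Toy mirror of Retry.increment's counter bookkeeping (sketch only)."""
    total: Optional[int] = 3
    connect: Optional[int] = None  # None means "no separate connect budget"

    def is_exhausted(self):
        counters = [c for c in (self.total, self.connect) if c is not None]
        return bool(counters) and min(counters) < 0

    def increment(self, is_connect_error=False):
        # Decrement the overall budget, plus the connect budget if applicable.
        total = self.total if self.total is None else self.total - 1
        connect = self.connect
        if is_connect_error and connect is not None:
            connect -= 1
        new = replace(self, total=total, connect=connect)
        if new.is_exhausted():
            # Stands in for urllib3's MaxRetryError in this sketch.
            raise RuntimeError("max retries exceeded")
        return new
```

With `total=0`, as in the traceback, the first `increment()` raises immediately.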
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
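The adapter's `except MaxRetryError` chain above inspects `e.reason` to translate low-level urllib3 failures into requests' coarser exception types, falling through to `ConnectionError` — the path taken in this log. A simplified, self-contained sketch of that dispatch pattern (class names here are local stand-ins, not the real requests/urllib3 types):

```python
class MaxRetryError(Exception):
    """Stand-in for urllib3's MaxRetryError, carrying the underlying reason."""
    def __init__(self, reason):
        super().__init__(str(reason))
        self.reason = reason

class ConnectTimeoutError(Exception):
    pass

class StandinProxyError(Exception):
    pass

def translate(exc):
    """Map a retry-exhaustion error onto a user-facing error class by its
    reason, in the spirit of the adapter code above (sketch only)."""
    if isinstance(exc, MaxRetryError):
        if isinstance(exc.reason, ConnectTimeoutError):
            return TimeoutError(exc.reason)
        if isinstance(exc.reason, StandinProxyError):
            return OSError("proxy error")
        # Fallback: a plain connection error, as seen in this traceback.
        return ConnectionError(exc.reason)
    return exc
```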
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
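The `setup_class` above isolates the test server by pointing every config/data/runtime location at throwaway directories and patching the environment, so tests never touch the real user setup. The pattern can be sketched with the standard library alone (the `JUPYTER_*` variable names below are assumptions for illustration, not taken from this log):

```python
import os
from tempfile import TemporaryDirectory
from unittest.mock import patch

# Throwaway root; cleaned up automatically when the object is finalized.
tmp = TemporaryDirectory()

def subdir(*parts):
    """Create-and-return a subdirectory, idempotently (like tmp() above)."""
    path = os.path.join(tmp.name, *parts)
    os.makedirs(path, exist_ok=True)
    return path

env = {
    "JUPYTER_CONFIG_DIR": subdir("config"),
    "JUPYTER_DATA_DIR": subdir("data"),
    "JUPYTER_RUNTIME_DIR": subdir("runtime"),
}

# patch.dict restores the original environment when the block exits.
with patch.dict(os.environ, env):
    assert os.environ["JUPYTER_CONFIG_DIR"].startswith(tmp.name)
```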
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s __________________ ERROR at setup of APITest.test_upload_b64 ___________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
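The `wait_until_alive` loop above polls the server URL, swallowing connection errors until the server answers or the thread dies — which is why the final `RuntimeError: The notebook server failed to start` chains from the `ConnectionError`. A simplified, self-contained sketch of that polling pattern, with a fake fetch function standing in for the HTTP request:

```python
import time

def wait_until_alive(fetch, attempts=5, poll_interval=0.01):
    """Poll `fetch` until it succeeds, re-raising from the last error on
    timeout. Simplified sketch of the launchnotebook helper above."""
    last_error = None
    for _ in range(attempts):
        try:
            return fetch()
        except ConnectionError as e:   # server not accepting connections yet
            last_error = e
            time.sleep(poll_interval)
    raise RuntimeError("server never came up") from last_error

# A fake server that only "starts" after two failed polls:
state = {"calls": 0}
def fake_fetch():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("connection refused")
    return "ok"

result = wait_until_alive(fake_fetch)
```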
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
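Before resolving the address, `create_connection` above strips URL-style IPv6 brackets and validates the hostname via the `idna` codec, raising on empty or over-long labels. That normalization step in isolation, as a sketch:

```python
def normalize_host(host):
    """Strip IPv6 brackets and reject malformed hostnames, mirroring the
    checks shown in the create_connection source above (sketch only)."""
    if host.startswith("["):
        host = host.strip("[]")
    try:
        # The idna codec raises UnicodeError for empty or >63-char labels.
        host.encode("idna")
    except UnicodeError:
        raise ValueError(f"'{host}', label empty or too long")
    return host
```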
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
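The `socket_options = [(6, 1, 1)]` in the traceback above is `(IPPROTO_TCP, TCP_NODELAY, 1)`: options are applied as `(level, optname, value)` tuples before connecting, then the timeout and source address are set. A minimal sketch of that setup sequence:

```python
import socket

def make_socket(socket_options, timeout=None):
    """Create a TCP socket and apply (level, optname, value) tuples before
    connecting, matching the shape of socket_options=[(6, 1, 1)] above.
    Illustrative sketch, not urllib3's _set_socket_options."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    for level, optname, value in socket_options:
        sock.setsockopt(level, optname, value)
    if timeout is not None:
        sock.settimeout(timeout)
    return sock

# (6, 1, 1) spelled with named constants:
sock = make_socket([(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)], timeout=2.0)
```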
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s __________________ ERROR at setup of APITest.test_upload_txt ___________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _______________ ERROR at setup of APITest.test_upload_txt_hidden _______________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ___________________ ERROR at setup of APITest.test_upload_v2 ___________________ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s                 raise SSLError(e, request=request)
233s
233s >       raise ConnectionError(e, request=request)
233s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s cls =
233s
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                 import asyncio
233s
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s
233s notebook/tests/launchnotebook.py:198:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s cls =
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_checkpoints _______
233s
233s self =
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object. Passing the optional
233s         *timeout* parameter will set the timeout on the socket instance
233s         before attempting to connect. If no *timeout* is supplied, the
233s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s         is used. If *source_address* is set it must be a tuple of (host, port)
233s         for the socket to bind as a source address before making the connection.
233s         An host of '' or port 0 tells the OS to use the default.
233s         """
233s
233s         host, port = address
233s         if host.startswith("["):
233s             host = host.strip("[]")
233s         err = None
233s
233s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s         # The original create_connection function always returns all records.
233s         family = allowed_gai_family()
233s
233s         try:
233s             host.encode("idna")
233s         except UnicodeError:
233s             raise LocationParseError(f"'{host}', label empty or too long") from None
233s
233s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s             af, socktype, proto, canonname, sa = res
233s             sock = None
233s             try:
233s                 sock = socket.socket(af, socktype, proto)
233s
233s                 # If provided, set socket level options before connecting.
233s                 _set_socket_options(sock, socket_options)
233s
233s                 if timeout is not _DEFAULT_TIMEOUT:
233s                     sock.settimeout(timeout)
233s                 if source_address:
233s                     sock.bind(source_address)
233s >               sock.connect(sa)
233s E               ConnectionRefusedError: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s
233s     def urlopen( # type: ignore[override]
233s         self,
233s         method: str,
233s         url: str,
233s         body: _TYPE_BODY | None = None,
233s         headers: typing.Mapping[str, str] | None = None,
233s         retries: Retry | bool | int | None = None,
233s         redirect: bool = True,
233s         assert_same_host: bool = True,
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         pool_timeout: int | None = None,
233s         release_conn: bool | None = None,
233s         chunked: bool = False,
233s         body_pos: _TYPE_BODY_POSITION | None = None,
233s         preload_content: bool = True,
233s         decode_content: bool = True,
233s         **response_kw: typing.Any,
233s     ) -> BaseHTTPResponse:
233s         """
233s         Get a connection from the pool and perform an HTTP request. This is the
233s         lowest level call for making a request, so you'll need to specify all
233s         the raw details.
233s
233s         .. note::
233s
233s            More commonly, it's appropriate to use a convenience method
233s            such as :meth:`request`.
233s
233s         .. note::
233s
233s            `release_conn` will only behave as expected if
233s            `preload_content=False` because we want to make
233s            `preload_content=False` the default behaviour someday soon without
233s            breaking backwards compatibility.
233s
233s         :param method:
233s             HTTP request method (such as GET, POST, PUT, etc.)
233s
233s         :param url:
233s             The URL to perform the request on.
233s
233s         :param body:
233s             Data to send in the request body, either :class:`str`, :class:`bytes`,
233s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s
233s         :param headers:
233s             Dictionary of custom headers to send, such as User-Agent,
233s             If-None-Match, etc. If None, pool headers are used. If provided,
233s             these headers completely replace any pool-specific headers.
233s
233s         :param retries:
233s             Configure the number of retries to allow before raising a
233s             :class:`~urllib3.exceptions.MaxRetryError` exception.
233s
233s             Pass ``None`` to retry until you receive a response. Pass a
233s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s             over different types of retries.
233s             Pass an integer number to retry connection errors that many times,
233s             but no other types of errors. Pass zero to never retry.
233s
233s             If ``False``, then retries are disabled and any exception is raised
233s             immediately. Also, instead of raising a MaxRetryError on redirects,
233s             the redirect response will be returned.
233s
233s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s
233s         :param redirect:
233s             If True, automatically handle redirects (status codes 301, 302,
233s             303, 307, 308). Each redirect counts as a retry. Disabling retries
233s             will disable redirect, too.
233s
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _ ERROR at setup of GenericFileCheckpointsAPITest.test_checkpoints_separate_root _
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s             raise reraise(type(error), error, _stacktrace)
233s 
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s 
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s 
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s 
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s 
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s 
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s 
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s 
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s 
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s 
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s                     raise SSLError(e, request=request)
233s 
233s >           raise ConnectionError(e, request=request)
233s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s 
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s 
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s 
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s 
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                     import asyncio
233s 
233s                     asyncio.set_event_loop(asyncio.new_event_loop())
233s                     # Patch the current loop in order to match production
233s                     # behavior
233s                     import nest_asyncio
233s 
233s                     nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s 
233s notebook/tests/launchnotebook.py:198: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s __ ERROR at setup of GenericFileCheckpointsAPITest.test_config_did_something ___
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s 
233s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object.
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool = 
233s _stacktrace = 
233s 
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s 
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s 
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s 
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s 
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s 
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s 
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s 
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s 
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s 
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s             status = response.status
233s 
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s 
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s 
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s 
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s 
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s 
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s 
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s 
233s >           raise ConnectionError(e, request=request)
233s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s 
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s 
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s 
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s 
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                     import asyncio
233s 
233s                     asyncio.set_event_loop(asyncio.new_event_loop())
233s                     # Patch the current loop in order to match production
233s                     # behavior
233s                     import nest_asyncio
233s 
233s                     nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s 
233s notebook/tests/launchnotebook.py:198: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy ___________
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s def create_connection(
233s     address: tuple[str, int],
233s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s     source_address: tuple[str, int] | None = None,
233s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s ) -> socket.socket:
233s     """Connect to *address* and return the socket object.
233s 
233s     Convenience function. Connect to *address* (a 2-tuple ``(host,
233s     port)``) and return the socket object. Passing the optional
233s     *timeout* parameter will set the timeout on the socket instance
233s     before attempting to connect. If no *timeout* is supplied, the
233s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s     is used. If *source_address* is set it must be a tuple of (host, port)
233s     for the socket to bind as a source address before making the connection.
233s     An host of '' or port 0 tells the OS to use the default.
233s     """
233s 
233s     host, port = address
233s     if host.startswith("["):
233s         host = host.strip("[]")
233s     err = None
233s 
233s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s     # The original create_connection function always returns all records.
233s     family = allowed_gai_family()
233s 
233s     try:
233s         host.encode("idna")
233s     except UnicodeError:
233s         raise LocationParseError(f"'{host}', label empty or too long") from None
233s 
233s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s         af, socktype, proto, canonname, sa = res
233s         sock = None
233s         try:
233s             sock = socket.socket(af, socktype, proto)
233s 
233s             # If provided, set socket level options before connecting.
233s             _set_socket_options(sock, socket_options)
233s 
233s             if timeout is not _DEFAULT_TIMEOUT:
233s                 sock.settimeout(timeout)
233s             if source_address:
233s                 sock.bind(source_address)
233s >           sock.connect(sa)
233s E           ConnectionRefusedError: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s 
233s     def urlopen( # type: ignore[override]
233s         self,
233s         method: str,
233s         url: str,
233s         body: _TYPE_BODY | None = None,
233s         headers: typing.Mapping[str, str] | None = None,
233s         retries: Retry | bool | int | None = None,
233s         redirect: bool = True,
233s         assert_same_host: bool = True,
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         pool_timeout: int | None = None,
233s         release_conn: bool | None = None,
233s         chunked: bool = False,
233s         body_pos: _TYPE_BODY_POSITION | None = None,
233s         preload_content: bool = True,
233s         decode_content: bool = True,
233s         **response_kw: typing.Any,
233s     ) -> BaseHTTPResponse:
233s         """
233s         Get a connection from the pool and perform an HTTP request. This is the
233s         lowest level call for making a request, so you'll need to specify all
233s         the raw details.
233s 
233s         .. note::
233s 
233s            More commonly, it's appropriate to use a convenience method
233s            such as :meth:`request`.
233s 
233s         .. note::
233s 
233s            `release_conn` will only behave as expected if
233s            `preload_content=False` because we want to make
233s            `preload_content=False` the default behaviour someday soon without
233s            breaking backwards compatibility.
233s 
233s         :param method:
233s             HTTP request method (such as GET, POST, PUT, etc.)
233s 
233s         :param url:
233s             The URL to perform the request on.
233s 
233s         :param body:
233s             Data to send in the request body, either :class:`str`, :class:`bytes`,
233s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s 
233s         :param headers:
233s             Dictionary of custom headers to send, such as User-Agent,
233s             If-None-Match, etc. If None, pool headers are used. If provided,
233s             these headers completely replace any pool-specific headers.
233s 
233s         :param retries:
233s             Configure the number of retries to allow before raising a
233s             :class:`~urllib3.exceptions.MaxRetryError` exception.
233s 
233s             Pass ``None`` to retry until you receive a response. Pass a
233s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s             over different types of retries.
233s             Pass an integer number to retry connection errors that many times,
233s             but no other types of errors. Pass zero to never retry.
233s 
233s             If ``False``, then retries are disabled and any exception is raised
233s             immediately. Also, instead of raising a MaxRetryError on redirects,
233s             the redirect response will be returned.
233s 
233s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s 
233s         :param redirect:
233s             If True, automatically handle redirects (status codes 301, 302,
233s             303, 307, 308). Each redirect counts as a retry. Disabling retries
233s             will disable redirect, too.
233s 
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s 
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s 
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s 
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s 
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s 
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s 
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s 
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
233s         """
233s         parsed_url = parse_url(url)
233s         destination_scheme = parsed_url.scheme
233s 
233s         if headers is None:
233s             headers = self.headers
233s 
233s         if not isinstance(retries, Retry):
233s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s 
233s         if release_conn is None:
233s             release_conn = preload_content
233s 
233s         # Check host
233s         if assert_same_host and not self.is_same_host(url):
233s             raise HostChangedError(self, url, retries)
233s 
233s         # Ensure that the URL we're connecting to is properly encoded
233s         if url.startswith("/"):
233s             url = to_str(_encode_target(url))
233s         else:
233s             url = to_str(parsed_url.url)
233s 
233s         conn = None
233s 
233s         # Track whether `conn` needs to be released before
233s         # returning/raising/recursing. Update this variable if necessary, and
233s         # leave `release_conn` constant throughout the function. That way, if
233s         # the function recurses, the original value of `release_conn` will be
233s         # passed down into the recursive call, and its value will be respected.
233s         #
233s         # See issue #651 [1] for details.
233s         #
233s         # [1] 
233s         release_this_conn = release_conn
233s 
233s         http_tunnel_required = connection_requires_http_tunnel(
233s             self.proxy, self.proxy_config, destination_scheme
233s         )
233s 
233s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s         # have to copy the headers dict so we can safely change it without those
233s         # changes being reflected in anyone else's copy.
233s         if not http_tunnel_required:
233s             headers = headers.copy() # type: ignore[attr-defined]
233s             headers.update(self.proxy_headers) # type: ignore[union-attr]
233s 
233s         # Must keep the exception bound to a separate variable or else Python 3
233s         # complains about UnboundLocalError.
233s         err = None
233s 
233s         # Keep track of whether we cleanly exited the except block. This
233s         # ensures we do proper cleanup in finally.
233s         clean_exit = False
233s 
233s         # Rewind body position, if needed. Record current position
233s         # for future rewinds in the event of a redirect/retry.
233s         body_pos = set_file_position(body, body_pos)
233s 
233s         try:
233s             # Request a connection from the queue.
233s             timeout_obj = self._get_timeout(timeout)
233s             conn = self._get_conn(timeout=pool_timeout)
233s 
233s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
233s 
233s             # Is this a closed/new connection that requires CONNECT tunnelling?
233s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s                 try:
233s                     self._prepare_proxy(conn)
233s                 except (BaseSSLError, OSError, SocketTimeout) as e:
233s                     self._raise_timeout(
233s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
233s                     )
233s                     raise
233s 
233s             # If we're going to release the connection in ``finally:``, then
233s             # the response doesn't need to know about the connection. Otherwise
233s             # it will also try to release it and we'll have a double-release
233s             # mess.
233s             response_conn = conn if not release_conn else None
233s 
233s             # Make the request on the HTTPConnection object
233s >           response = self._make_request(
233s                 conn,
233s                 method,
233s                 url,
233s                 timeout=timeout_obj,
233s                 body=body,
233s                 headers=headers,
233s                 chunked=chunked,
233s                 retries=retries,
233s                 response_conn=response_conn,
233s                 preload_content=preload_content,
233s                 decode_content=decode_content,
233s                 **response_kw,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s     conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s     self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s     self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s     self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s 
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool = 
233s _stacktrace = 
233s 
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s 
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s 
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s 
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s 
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s 
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s 
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s 
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s 
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s 
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s             status = response.status
233s 
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s 
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s 
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_400_hidden _____ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
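Every traceback in this run bottoms out in the same `[Errno 111] Connection refused` against `localhost:12341`: the notebook server thread died before binding its port, so each test's setup polls a socket nobody is listening on. That failure mode is easy to reproduce with the standard library alone; a minimal sketch (the `try_connect` helper and port-probing trick are illustrative, not part of the test suite):

```python
import errno
import socket

def try_connect(host, port, timeout=1.0):
    """Attempt a TCP connection; return None on success, else the OS errno."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return None
    except OSError as e:
        return e.errno

# Find a port that is almost certainly closed: bind an ephemeral port to
# learn its number, then close the socket before connecting to it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

result = try_connect("127.0.0.1", closed_port)
# On Linux a refused TCP connection surfaces as errno 111 (ECONNREFUSED),
# the error at the bottom of every chained traceback in this log.
```

requests never sees the raw `ConnectionRefusedError`; as the frames above show, urllib3 wraps it in `NewConnectionError`, then `MaxRetryError`, and requests finally re-raises it as `requests.exceptions.ConnectionError`.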
233s         Passing the optional *timeout* parameter will set the timeout on the socket instance
233s         before attempting to connect. If no *timeout* is supplied, the
233s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s         is used. If *source_address* is set it must be a tuple of (host, port)
233s         for the socket to bind as a source address before making the connection.
233s         An host of '' or port 0 tells the OS to use the default.
233s         """
233s
233s         host, port = address
233s         if host.startswith("["):
233s             host = host.strip("[]")
233s         err = None
233s
233s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s         # The original create_connection function always returns all records.
233s         family = allowed_gai_family()
233s
233s         try:
233s             host.encode("idna")
233s         except UnicodeError:
233s             raise LocationParseError(f"'{host}', label empty or too long") from None
233s
233s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s             af, socktype, proto, canonname, sa = res
233s             sock = None
233s             try:
233s                 sock = socket.socket(af, socktype, proto)
233s
233s                 # If provided, set socket level options before connecting.
233s                 _set_socket_options(sock, socket_options)
233s
233s                 if timeout is not _DEFAULT_TIMEOUT:
233s                     sock.settimeout(timeout)
233s                 if source_address:
233s                     sock.bind(source_address)
233s >               sock.connect(sa)
233s E               ConnectionRefusedError: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s
233s     def urlopen(  # type: ignore[override]
233s         self,
233s         method: str,
233s         url: str,
233s         body: _TYPE_BODY | None = None,
233s         headers: typing.Mapping[str, str] | None = None,
233s         retries: Retry | bool | int | None = None,
233s         redirect: bool = True,
233s         assert_same_host: bool = True,
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         pool_timeout: int | None = None,
233s         release_conn: bool | None = None,
233s         chunked: bool = False,
233s         body_pos: _TYPE_BODY_POSITION | None = None,
233s         preload_content: bool = True,
233s         decode_content: bool = True,
233s         **response_kw: typing.Any,
233s     ) -> BaseHTTPResponse:
233s         """
233s         Get a connection from the pool and perform an HTTP request. This is the
233s         lowest level call for making a request, so you'll need to specify all
233s         the raw details.
233s
233s         .. note::
233s
233s            More commonly, it's appropriate to use a convenience method
233s            such as :meth:`request`.
233s
233s         .. note::
233s
233s            `release_conn` will only behave as expected if
233s            `preload_content=False` because we want to make
233s            `preload_content=False` the default behaviour someday soon without
233s            breaking backwards compatibility.
233s
233s         :param method:
233s             HTTP request method (such as GET, POST, PUT, etc.)
233s
233s         :param url:
233s             The URL to perform the request on.
233s
233s         :param body:
233s             Data to send in the request body, either :class:`str`, :class:`bytes`,
233s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s
233s         :param headers:
233s             Dictionary of custom headers to send, such as User-Agent,
233s             If-None-Match, etc. If None, pool headers are used. If provided,
233s             these headers completely replace any pool-specific headers.
233s
233s         :param retries:
233s             Configure the number of retries to allow before raising a
233s             :class:`~urllib3.exceptions.MaxRetryError` exception.
233s
233s             Pass ``None`` to retry until you receive a response. Pass a
233s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s             over different types of retries.
233s             Pass an integer number to retry connection errors that many times,
233s             but no other types of errors. Pass zero to never retry.
233s
233s             If ``False``, then retries are disabled and any exception is raised
233s             immediately. Also, instead of raising a MaxRetryError on redirects,
233s             the redirect response will be returned.
233s
233s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s
233s         :param redirect:
233s             If True, automatically handle redirects (status codes 301, 302,
233s             303, 307, 308). Each redirect counts as a retry. Disabling retries
233s             will disable redirect, too.
233s
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
233s         """
233s         parsed_url = parse_url(url)
233s         destination_scheme = parsed_url.scheme
233s
233s         if headers is None:
233s             headers = self.headers
233s
233s         if not isinstance(retries, Retry):
233s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s
233s         if release_conn is None:
233s             release_conn = preload_content
233s
233s         # Check host
233s         if assert_same_host and not self.is_same_host(url):
233s             raise HostChangedError(self, url, retries)
233s
233s         # Ensure that the URL we're connecting to is properly encoded
233s         if url.startswith("/"):
233s             url = to_str(_encode_target(url))
233s         else:
233s             url = to_str(parsed_url.url)
233s
233s         conn = None
233s
233s         # Track whether `conn` needs to be released before
233s         # returning/raising/recursing. Update this variable if necessary, and
233s         # leave `release_conn` constant throughout the function. That way, if
233s         # the function recurses, the original value of `release_conn` will be
233s         # passed down into the recursive call, and its value will be respected.
233s         #
233s         # See issue #651 [1] for details.
233s         #
233s         # [1]
233s         release_this_conn = release_conn
233s
233s         http_tunnel_required = connection_requires_http_tunnel(
233s             self.proxy, self.proxy_config, destination_scheme
233s         )
233s
233s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s         # have to copy the headers dict so we can safely change it without those
233s         # changes being reflected in anyone else's copy.
233s         if not http_tunnel_required:
233s             headers = headers.copy()  # type: ignore[attr-defined]
233s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
233s
233s         # Must keep the exception bound to a separate variable or else Python 3
233s         # complains about UnboundLocalError.
233s         err = None
233s
233s         # Keep track of whether we cleanly exited the except block. This
233s         # ensures we do proper cleanup in finally.
233s         clean_exit = False
233s
233s         # Rewind body position, if needed. Record current position
233s         # for future rewinds in the event of a redirect/retry.
233s         body_pos = set_file_position(body, body_pos)
233s
233s         try:
233s             # Request a connection from the queue.
233s             timeout_obj = self._get_timeout(timeout)
233s             conn = self._get_conn(timeout=pool_timeout)
233s
233s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
233s
233s             # Is this a closed/new connection that requires CONNECT tunnelling?
233s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s                 try:
233s                     self._prepare_proxy(conn)
233s                 except (BaseSSLError, OSError, SocketTimeout) as e:
233s                     self._raise_timeout(
233s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
233s                     )
233s                     raise
233s
233s             # If we're going to release the connection in ``finally:``, then
233s             # the response doesn't need to know about the connection. Otherwise
233s             # it will also try to release it and we'll have a double-release
233s             # mess.
233s             response_conn = conn if not release_conn else None
233s
233s             # Make the request on the HTTPConnection object
233s >           response = self._make_request(
233s                 conn,
233s                 method,
233s                 url,
233s                 timeout=timeout_obj,
233s                 body=body,
233s                 headers=headers,
233s                 chunked=chunked,
233s                 retries=retries,
233s                 response_conn=response_conn,
233s                 preload_content=preload_content,
233s                 decode_content=decode_content,
233s                 **response_kw,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s     conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s     self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s     self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s     self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:486:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool =
233s _stacktrace =
233s
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s
233s During handling of the above exception, another exception occurred:
233s
233s cls =
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s
233s notebook/tests/launchnotebook.py:53:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s                 raise SSLError(e, request=request)
233s
233s >           raise ConnectionError(e, request=request)
233s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s cls =
233s
233s     @classmethod
233s     def setup_class(cls):
233s         cls.tmp_dir = TemporaryDirectory()
233s         def tmp(*parts):
233s             path = os.path.join(cls.tmp_dir.name, *parts)
233s             try:
233s                 os.makedirs(path)
233s             except OSError as e:
233s                 if e.errno != errno.EEXIST:
233s                     raise
233s             return path
233s
233s         cls.home_dir = tmp('home')
233s         data_dir = cls.data_dir = tmp('data')
233s         config_dir = cls.config_dir = tmp('config')
233s         runtime_dir = cls.runtime_dir = tmp('runtime')
233s         cls.notebook_dir = tmp('notebooks')
233s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s         cls.env_patch.start()
233s         # Patch systemwide & user-wide data & config directories, to isolate
233s         # the tests from oddities of the local setup. But leave Python env
233s         # locations alone, so data files for e.g. nbconvert are accessible.
233s         # If this isolation isn't sufficient, you may need to run the tests in
233s         # a virtualenv or conda env.
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                     import asyncio
233s
233s                     asyncio.set_event_loop(asyncio.new_event_loop())
233s                     # Patch the current loop in order to match production
233s                     # behavior
233s                     import nest_asyncio
233s
233s                     nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s
233s notebook/tests/launchnotebook.py:198:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s cls =
233s
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_copy ________
233s
233s self =
233s
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object.
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_dir_400 _______ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
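[Editorial aside: the `retries` semantics documented above are what drive this log's failure — the frames show `Retry(total=0, ...)`, i.e. one attempt and no retries. The sketch below is a simplified, illustrative mimic of the documented decrement-and-raise bookkeeping in plain Python; the class and exception names are hypothetical, not urllib3's actual API.]

```python
# Simplified sketch of the Retry bookkeeping described in the docstring
# above: counters decrement on each error, and once exhausted the pool
# raises a "max retries exceeded" error. Names here are illustrative.

class MaxRetriesExceeded(Exception):
    pass


class SimpleRetry:
    def __init__(self, total):
        # total=False disables retries entirely (re-raise the error);
        # total=0 allows the first attempt but no retries after it.
        self.total = total

    def increment(self, error):
        if self.total is False:
            raise error  # retries disabled: propagate the original error
        total = self.total - 1
        if total < 0:
            raise MaxRetriesExceeded() from error  # counters exhausted
        return SimpleRetry(total)


# Retry(total=0), as in the log: the first connection error exhausts it.
r = SimpleRetry(total=0)
try:
    r.increment(ConnectionRefusedError(111, "Connection refused"))
except MaxRetriesExceeded:
    print("max retries exceeded")
```

This mirrors why a single `ConnectionRefusedError` in this log surfaces immediately as `MaxRetryError` rather than being retried.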
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_path ________
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_put_400 _______ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s 
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s 
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s 
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                 import asyncio
233s 
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s 
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s 
233s notebook/tests/launchnotebook.py:198: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s cls = 
233s 
233s 
@classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_put_400_hidden ___
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s 
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object.
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_create_untitled _____ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_create_untitled_txt ___ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
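The `RuntimeError: The notebook server failed to start` above comes from `wait_until_alive`: it polls the contents API and bails out as soon as the server thread is dead, chaining the last probe error as the cause. A minimal standalone sketch of that polling pattern (the constants and the `probe` callable are illustrative; the real values live in `notebook/tests/launchnotebook.py`):

```python
import threading
import time

MAX_WAITTIME = 5.0    # illustrative, not the suite's actual constant
POLL_INTERVAL = 0.1


def wait_until_alive(probe, server_thread,
                     max_waittime=MAX_WAITTIME, poll_interval=POLL_INTERVAL):
    """Poll *probe* until it succeeds, failing fast if the server died.

    *probe* is any zero-argument callable that raises while the server
    is not yet reachable (e.g. a wrapper around an HTTP GET).
    """
    for _ in range(int(max_waittime / poll_interval)):
        try:
            probe()
            return
        except Exception as e:
            if not server_thread.is_alive():
                # "raise ... from e" produces the chained report seen in
                # the log ("The above exception was the direct cause ...").
                raise RuntimeError("The server failed to start") from e
        time.sleep(poll_interval)
    raise RuntimeError("The server never became ready")
```

In the failing runs above, the server thread exits during startup, so the very first refused connection is converted into the `RuntimeError` rather than being retried for the full wait window.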
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
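The `Retry(total=0)` objects in the log explain why a single refused connection becomes `MaxRetryError` immediately: `increment()` returns a copy with the budget decremented, and a copy whose total has gone negative is exhausted. A toy model of that flow (a deliberately reduced stand-in, not urllib3's actual class):

```python
class RetryBudget:
    """Toy retry budget mirroring the increment()/is_exhausted() flow
    shown in the urllib3 traceback above."""

    def __init__(self, total):
        self.total = total  # None means "retry indefinitely"

    def increment(self, error):
        total = self.total
        if total is not None:
            total -= 1
        new = RetryBudget(total)
        if new.is_exhausted():
            # urllib3 raises MaxRetryError here; chain the root cause.
            raise RuntimeError("max retries exceeded") from error
        return new

    def is_exhausted(self):
        return self.total is not None and self.total < 0
```

With `total=0`, the first `increment()` yields `total=-1`, which is exhausted, so the original `ConnectionRefusedError` surfaces after exactly one attempt, as seen in each error record above.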
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
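The `adapter.send` branch above translates urllib3's `MaxRetryError` into requests' exception hierarchy by inspecting `e.reason`. The dispatch pattern, reduced to a stdlib-only sketch with stand-in exception classes (all names here are hypothetical stand-ins; the subclass relationship mirrors the one implied by the `NewConnectionError` check and `#2811` TODO in the traceback):

```python
class MaxRetryError(Exception):
    def __init__(self, reason):
        super().__init__(reason)
        self.reason = reason

class ConnectTimeoutError(Exception): ...
class NewConnectionError(ConnectTimeoutError): ...  # refused != timed out
class ResponseError(Exception): ...

class ConnectTimeout(Exception): ...
class RetryError(Exception): ...
class ConnError(Exception): ...  # stand-in for requests.ConnectionError


def translate(e):
    """Map a MaxRetryError to a user-facing exception via its .reason,
    keeping the same ordering of checks as the adapter code above."""
    if isinstance(e.reason, ConnectTimeoutError):
        # NewConnectionError also subclasses ConnectTimeoutError, but a
        # refused connection is not a timeout, hence the extra check.
        if not isinstance(e.reason, NewConnectionError):
            raise ConnectTimeout(e)
    if isinstance(e.reason, ResponseError):
        raise RetryError(e)
    raise ConnError(e)
```

This is why the log's refused connections end up as `requests.exceptions.ConnectionError` rather than `ConnectTimeout`: the reason is a `NewConnectionError`, so the timeout branch is skipped and the generic fallback fires.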
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_delete_hidden_dir ____ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_delete_hidden_file ____ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """
233s parsed_url = parse_url(url)
233s destination_scheme = parsed_url.scheme
233s
233s if headers is None:
233s headers = self.headers
233s
233s if not isinstance(retries, Retry):
233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s
233s if release_conn is None:
233s release_conn = preload_content
233s
233s # Check host
233s if assert_same_host and not self.is_same_host(url):
233s raise HostChangedError(self, url, retries)
233s
233s # Ensure that the URL we're connecting to is properly encoded
233s if url.startswith("/"):
233s url = to_str(_encode_target(url))
233s else:
233s url = to_str(parsed_url.url)
233s
233s conn = None
233s
233s # Track whether `conn` needs to be released before
233s # returning/raising/recursing. Update this variable if necessary, and
233s # leave `release_conn` constant throughout the function. That way, if
233s # the function recurses, the original value of `release_conn` will be
233s # passed down into the recursive call, and its value will be respected.
233s #
233s # See issue #651 [1] for details.
233s #
233s # [1]
233s release_this_conn = release_conn
233s
233s http_tunnel_required = connection_requires_http_tunnel(
233s self.proxy, self.proxy_config, destination_scheme
233s )
233s
233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s # have to copy the headers dict so we can safely change it without those
233s # changes being reflected in anyone else's copy.
233s if not http_tunnel_required:
233s headers = headers.copy() # type: ignore[attr-defined]
233s headers.update(self.proxy_headers) # type: ignore[union-attr]
233s
233s # Must keep the exception bound to a separate variable or else Python 3
233s # complains about UnboundLocalError.
233s err = None
233s
233s # Keep track of whether we cleanly exited the except block. This
233s # ensures we do proper cleanup in finally.
233s clean_exit = False
233s
233s # Rewind body position, if needed. Record current position
233s # for future rewinds in the event of a redirect/retry.
233s body_pos = set_file_position(body, body_pos)
233s
233s try:
233s # Request a connection from the queue.
233s timeout_obj = self._get_timeout(timeout)
233s conn = self._get_conn(timeout=pool_timeout)
233s
233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
233s
233s # Is this a closed/new connection that requires CONNECT tunnelling?
233s if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s try:
233s self._prepare_proxy(conn)
233s except (BaseSSLError, OSError, SocketTimeout) as e:
233s self._raise_timeout(
233s err=e, url=self.proxy.url, timeout_value=conn.timeout
233s )
233s raise
233s
233s # If we're going to release the connection in ``finally:``, then
233s # the response doesn't need to know about the connection. Otherwise
233s # it will also try to release it and we'll have a double-release
233s # mess.
233s response_conn = conn if not release_conn else None
233s
233s # Make the request on the HTTPConnection object
233s > response = self._make_request(
233s conn,
233s method,
233s url,
233s timeout=timeout_obj,
233s body=body,
233s headers=headers,
233s chunked=chunked,
233s retries=retries,
233s response_conn=response_conn,
233s preload_content=preload_content,
233s decode_content=decode_content,
233s **response_kw,
233s )
233s
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s
233s def _new_conn(self) -> socket.socket:
233s """Establish a socket connection and set nodelay settings on it.
233s
233s :return: New socket connection.
233s """
233s try:
233s sock = connection.create_connection(
233s (self._dns_host, self.port),
233s self.timeout,
233s source_address=self.source_address,
233s socket_options=self.socket_options,
233s )
233s except socket.gaierror as e:
233s raise NameResolutionError(self.host, self, e) from e
233s except SocketTimeout as e:
233s raise ConnectTimeoutError(
233s self,
233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s ) from e
233s
233s except OSError as e:
233s > raise NewConnectionError(
233s self, f"Failed to establish a new connection: {e}"
233s ) from e
233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s def send(
233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s ):
233s """Sends PreparedRequest object. Returns Response object.
233s
233s :param request: The :class:`PreparedRequest ` being sent.
233s :param stream: (optional) Whether to stream the request content.
233s :param timeout: (optional) How long to wait for the server to send
233s data before giving up, as a float, or a :ref:`(connect timeout,
233s read timeout) ` tuple.
233s :type timeout: float or tuple or urllib3 Timeout object
233s :param verify: (optional) Either a boolean, in which case it controls whether
233s we verify the server's TLS certificate, or a string, in which case it
233s must be a path to a CA bundle to use
233s :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s :param proxies: (optional) The proxies dictionary to apply to the request.
233s :rtype: requests.Response
233s """
233s
233s try:
233s conn = self.get_connection(request.url, proxies)
233s except LocationValueError as e:
233s raise InvalidURL(e, request=request)
233s
233s self.cert_verify(conn, request.url, verify, cert)
233s url = self.request_url(request, proxies)
233s self.add_headers(
233s request,
233s stream=stream,
233s timeout=timeout,
233s verify=verify,
233s cert=cert,
233s proxies=proxies,
233s )
233s
233s chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s if isinstance(timeout, tuple):
233s try:
233s connect, read = timeout
233s timeout = TimeoutSauce(connect=connect, read=read)
233s except ValueError:
233s raise ValueError(
233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s f"or a single float to set both timeouts to the same value."
233s )
233s elif isinstance(timeout, TimeoutSauce):
233s pass
233s else:
233s timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s try:
233s > resp = conn.urlopen(
233s method=request.method,
233s url=url,
233s body=request.body,
233s headers=request.headers,
233s redirect=False,
233s assert_same_host=False,
233s preload_content=False,
233s decode_content=False,
233s retries=self.max_retries,
233s timeout=timeout,
233s chunked=chunked,
233s )
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:486:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool =
233s _stacktrace =
233s
233s def increment(
233s self,
233s method: str | None = None,
233s url: str | None = None,
233s response: BaseHTTPResponse | None = None,
233s error: Exception | None = None,
233s _pool: ConnectionPool | None = None,
233s _stacktrace: TracebackType | None = None,
233s ) -> Retry:
233s """Return a new Retry object with incremented retry counters.
233s
233s :param response: A response object, or None, if the server did not
233s return a response.
233s :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s :param Exception error: An error encountered during the request, or
233s None if the response was received successfully.
233s
233s :return: A new ``Retry`` object.
233s """
233s if self.total is False and error:
233s # Disabled, indicate to re-raise the error.
233s raise reraise(type(error), error, _stacktrace)
233s
233s total = self.total
233s if total is not None:
233s total -= 1
233s
233s connect = self.connect
233s read = self.read
233s redirect = self.redirect
233s status_count = self.status
233s other = self.other
233s cause = "unknown"
233s status = None
233s redirect_location = None
233s
233s if error and self._is_connection_error(error):
233s # Connect retry?
233s if connect is False:
233s raise reraise(type(error), error, _stacktrace)
233s elif connect is not None:
233s connect -= 1
233s
233s elif error and self._is_read_error(error):
233s # Read retry?
233s if read is False or method is None or not self._is_method_retryable(method):
233s raise reraise(type(error), error, _stacktrace)
233s elif read is not None:
233s read -= 1
233s
233s elif error:
233s # Other retry?
233s if other is not None:
233s other -= 1
233s
233s elif response and response.get_redirect_location():
233s # Redirect retry?
233s if redirect is not None:
233s redirect -= 1
233s cause = "too many redirects"
233s response_redirect_location = response.get_redirect_location()
233s if response_redirect_location:
233s redirect_location = response_redirect_location
233s status = response.status
233s
233s else:
233s # Incrementing because of a server error like a 500 in
233s # status_forcelist and the given method is in the allowed_methods
233s cause = ResponseError.GENERIC_ERROR
233s if response and response.status:
233s if status_count is not None:
233s status_count -= 1
233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s status = response.status
233s
233s history = self.history + (
233s RequestHistory(method, url, error, status, redirect_location),
233s )
233s
233s new_retry = self.new(
233s total=total,
233s connect=connect,
233s read=read,
233s redirect=redirect,
233s status=status_count,
233s other=other,
233s history=history,
233s )
233s
233s if new_retry.is_exhausted():
233s reason = error or ResponseError(cause)
233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s
233s During handling of the above exception, another exception occurred:
233s
233s cls =
233s
233s @classmethod
233s def wait_until_alive(cls):
233s """Wait for the server to be alive"""
233s url = cls.base_url() + 'api/contents'
233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s try:
233s > cls.fetch_url(url)
233s
233s notebook/tests/launchnotebook.py:53:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s def send(
233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s ):
233s """Sends PreparedRequest object. Returns Response object.
233s
233s :param request: The :class:`PreparedRequest ` being sent.
233s :param stream: (optional) Whether to stream the request content.
233s :param timeout: (optional) How long to wait for the server to send
233s data before giving up, as a float, or a :ref:`(connect timeout,
233s read timeout) ` tuple.
233s :type timeout: float or tuple or urllib3 Timeout object
233s :param verify: (optional) Either a boolean, in which case it controls whether
233s we verify the server's TLS certificate, or a string, in which case it
233s must be a path to a CA bundle to use
233s :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s :param proxies: (optional) The proxies dictionary to apply to the request.
233s :rtype: requests.Response
233s """
233s
233s try:
233s conn = self.get_connection(request.url, proxies)
233s except LocationValueError as e:
233s raise InvalidURL(e, request=request)
233s
233s self.cert_verify(conn, request.url, verify, cert)
233s url = self.request_url(request, proxies)
233s self.add_headers(
233s request,
233s stream=stream,
233s timeout=timeout,
233s verify=verify,
233s cert=cert,
233s proxies=proxies,
233s )
233s
233s chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s if isinstance(timeout, tuple):
233s try:
233s connect, read = timeout
233s timeout = TimeoutSauce(connect=connect, read=read)
233s except ValueError:
233s raise ValueError(
233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s f"or a single float to set both timeouts to the same value."
233s )
233s elif isinstance(timeout, TimeoutSauce):
233s pass
233s else:
233s timeout = TimeoutSauce(connect=timeout, read=timeout)
233s
233s try:
233s resp = conn.urlopen(
233s method=request.method,
233s url=url,
233s body=request.body,
233s headers=request.headers,
233s redirect=False,
233s assert_same_host=False,
233s preload_content=False,
233s decode_content=False,
233s retries=self.max_retries,
233s timeout=timeout,
233s chunked=chunked,
233s )
233s
233s except (ProtocolError, OSError) as err:
233s raise ConnectionError(err, request=request)
233s
233s except MaxRetryError as e:
233s if isinstance(e.reason, ConnectTimeoutError):
233s # TODO: Remove this in 3.0.0: see #2811
233s if not isinstance(e.reason, NewConnectionError):
233s raise ConnectTimeout(e, request=request)
233s
233s if isinstance(e.reason, ResponseError):
233s raise RetryError(e, request=request)
233s
233s if isinstance(e.reason, _ProxyError):
233s raise ProxyError(e, request=request)
233s
233s if isinstance(e.reason, _SSLError):
233s # This branch is for urllib3 v1.22 and later.
233s raise SSLError(e, request=request)
233s
233s > raise ConnectionError(e, request=request)
233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s
233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s cls =
233s
233s @classmethod
233s def setup_class(cls):
233s cls.tmp_dir = TemporaryDirectory()
233s def tmp(*parts):
233s path = os.path.join(cls.tmp_dir.name, *parts)
233s try:
233s os.makedirs(path)
233s except OSError as e:
233s if e.errno != errno.EEXIST:
233s raise
233s return path
233s
233s cls.home_dir = tmp('home')
233s data_dir = cls.data_dir = tmp('data')
233s config_dir = cls.config_dir = tmp('config')
233s runtime_dir = cls.runtime_dir = tmp('runtime')
233s cls.notebook_dir = tmp('notebooks')
233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
233s cls.env_patch.start()
233s # Patch systemwide & user-wide data & config directories, to isolate
233s # the tests from oddities of the local setup. But leave Python env
233s # locations alone, so data files for e.g. nbconvert are accessible.
233s # If this isolation isn't sufficient, you may need to run the tests in
233s # a virtualenv or conda env.
233s cls.path_patch = patch.multiple(
233s jupyter_core.paths,
233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s )
233s cls.path_patch.start()
233s
233s config = cls.config or Config()
233s config.NotebookNotary.db_file = ':memory:'
233s
233s cls.token = hexlify(os.urandom(4)).decode('ascii')
233s
233s started = Event()
233s def start_thread():
233s try:
233s bind_args = cls.get_bind_args()
233s app = cls.notebook = NotebookApp(
233s port_retries=0,
233s open_browser=False,
233s config_dir=cls.config_dir,
233s data_dir=cls.data_dir,
233s runtime_dir=cls.runtime_dir,
233s notebook_dir=cls.notebook_dir,
233s base_url=cls.url_prefix,
233s config=config,
233s allow_root=True,
233s token=cls.token,
233s **bind_args
233s )
233s if "asyncio" in sys.modules:
233s app._init_asyncio_patch()
233s import asyncio
233s
233s asyncio.set_event_loop(asyncio.new_event_loop())
233s # Patch the current loop in order to match production
233s # behavior
233s import nest_asyncio
233s
233s nest_asyncio.apply()
233s # don't register signal handler during tests
233s app.init_signal = lambda : None
233s # clear log handlers and propagate to root for nose to capture it
233s # needs to be redone after initialize, which reconfigures logging
233s app.log.propagate = True
233s app.log.handlers = []
233s app.initialize(argv=cls.get_argv())
233s app.log.propagate = True
233s app.log.handlers = []
233s loop = IOLoop.current()
233s loop.add_callback(started.set)
233s app.start()
233s finally:
233s # set the event, so failure to start doesn't cause a hang
233s started.set()
233s app.session_manager.close()
233s cls.notebook_thread = Thread(target=start_thread)
233s cls.notebook_thread.daemon = True
233s cls.notebook_thread.start()
233s started.wait()
233s > cls.wait_until_alive()
233s
233s notebook/tests/launchnotebook.py:198:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s cls =
233s
233s @classmethod
233s def wait_until_alive(cls):
233s """Wait for the server to be alive"""
233s url = cls.base_url() + 'api/contents'
233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s try:
233s cls.fetch_url(url)
233s except ModuleNotFoundError as error:
233s # Errors that should be immediately thrown back to caller
233s raise error
233s except Exception as e:
233s if not cls.notebook_thread.is_alive():
233s > raise RuntimeError("The notebook server failed to start") from e
233s E RuntimeError: The notebook server failed to start
233s
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_file_checkpoints _____
233s
233s self =
233s
233s def _new_conn(self) -> socket.socket:
233s """Establish a socket connection and set nodelay settings on it.
233s
233s :return: New socket connection.
233s """
233s try:
233s > sock = connection.create_connection(
233s (self._dns_host, self.port),
233s self.timeout,
233s source_address=self.source_address,
233s socket_options=self.socket_options,
233s )
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s
233s def create_connection(
233s address: tuple[str, int],
233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s source_address: tuple[str, int] | None = None,
233s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s ) -> socket.socket:
233s """Connect to *address* and return the socket object.
233s
233s Convenience function. Connect to *address* (a 2-tuple ``(host,
233s port)``) and return the socket object. Passing the optional
233s *timeout* parameter will set the timeout on the socket instance
233s before attempting to connect. If no *timeout* is supplied, the
233s global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s is used. If *source_address* is set it must be a tuple of (host, port)
233s for the socket to bind as a source address before making the connection.
233s An host of '' or port 0 tells the OS to use the default.
233s """
233s
233s host, port = address
233s if host.startswith("["):
233s host = host.strip("[]")
233s err = None
233s
233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s # The original create_connection function always returns all records.
233s family = allowed_gai_family()
233s
233s try:
233s host.encode("idna")
233s except UnicodeError:
233s raise LocationParseError(f"'{host}', label empty or too long") from None
233s
233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s af, socktype, proto, canonname, sa = res
233s sock = None
233s try:
233s sock = socket.socket(af, socktype, proto)
233s
233s # If provided, set socket level options before connecting.
233s _set_socket_options(sock, socket_options)
233s
233s if timeout is not _DEFAULT_TIMEOUT:
233s sock.settimeout(timeout)
233s if source_address:
233s sock.bind(source_address)
233s > sock.connect(sa)
233s E ConnectionRefusedError: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s
233s def urlopen( # type: ignore[override]
233s self,
233s method: str,
233s url: str,
233s body: _TYPE_BODY | None = None,
233s headers: typing.Mapping[str, str] | None = None,
233s retries: Retry | bool | int | None = None,
233s redirect: bool = True,
233s assert_same_host: bool = True,
233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s pool_timeout: int | None = None,
233s release_conn: bool | None = None,
233s chunked: bool = False,
233s body_pos: _TYPE_BODY_POSITION | None = None,
233s preload_content: bool = True,
233s decode_content: bool = True,
233s **response_kw: typing.Any,
233s ) -> BaseHTTPResponse:
233s """
233s Get a connection from the pool and perform an HTTP request. This is the
233s lowest level call for making a request, so you'll need to specify all
233s the raw details.
233s
233s .. note::
233s
233s More commonly, it's appropriate to use a convenience method
233s such as :meth:`request`.
233s
233s .. note::
233s
233s `release_conn` will only behave as expected if
233s `preload_content=False` because we want to make
233s `preload_content=False` the default behaviour someday soon without
233s breaking backwards compatibility.
233s
233s :param method:
233s HTTP request method (such as GET, POST, PUT, etc.)
233s
233s :param url:
233s The URL to perform the request on.
233s
233s :param body:
233s Data to send in the request body, either :class:`str`, :class:`bytes`,
233s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s
233s :param headers:
233s Dictionary of custom headers to send, such as User-Agent,
233s If-None-Match, etc. If None, pool headers are used. If provided,
233s these headers completely replace any pool-specific headers.
233s
233s :param retries:
233s Configure the number of retries to allow before raising a
233s :class:`~urllib3.exceptions.MaxRetryError` exception.
233s
233s Pass ``None`` to retry until you receive a response. Pass a
233s :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s over different types of retries.
233s Pass an integer number to retry connection errors that many times,
233s but no other types of errors. Pass zero to never retry.
233s
233s If ``False``, then retries are disabled and any exception is raised
233s immediately. Also, instead of raising a MaxRetryError on redirects,
233s the redirect response will be returned.
233s
233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s
233s :param redirect:
233s If True, automatically handle redirects (status codes 301, 302,
233s 303, 307, 308). Each redirect counts as a retry. Disabling retries
233s will disable redirect, too.
233s
233s :param assert_same_host:
233s If ``True``, will make sure that the host of the pool requests is
233s consistent else will raise HostChangedError. When ``False``, you can
233s use the pool on an HTTP proxy and request foreign hosts.
233s
233s :param timeout:
233s If specified, overrides the default timeout for this one
233s request. It may be a float (in seconds) or an instance of
233s :class:`urllib3.util.Timeout`.
233s
233s :param pool_timeout:
233s If set and the pool is set to block=True, then this method will
233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s connection is available within the time period.
233s
233s :param bool preload_content:
233s If True, the response's body will be preloaded into memory.
233s
233s :param bool decode_content:
233s If True, will attempt to decode the body based on the
233s 'content-encoding' header.
233s
233s :param release_conn:
233s If False, then the urlopen call will not release the connection
233s back into the pool once a response is received (but will release if
233s you read the entire contents of the response such as when
233s `preload_content=True`). This is useful if you're not preloading
233s the response's content immediately. You will need to call
233s ``r.release_conn()`` on the response ``r`` to return the connection
233s back into the pool. If None, it takes the value of ``preload_content``
233s which defaults to ``True``.
233s
233s :param bool chunked:
233s If True, urllib3 will send the body using chunked transfer
233s encoding. Otherwise, urllib3 will send the body using the standard
233s content-length form. Defaults to False.
233s
233s :param int body_pos:
233s Position to seek to in file-like body in the event of a retry or
233s redirect. Typically this won't need to be set because urllib3 will
233s auto-populate the value when needed.
233s """
233s parsed_url = parse_url(url)
233s destination_scheme = parsed_url.scheme
233s
233s if headers is None:
233s headers = self.headers
233s
233s if not isinstance(retries, Retry):
233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s
233s if release_conn is None:
233s release_conn = preload_content
233s
233s # Check host
233s if assert_same_host and not self.is_same_host(url):
233s raise HostChangedError(self, url, retries)
233s
233s # Ensure that the URL we're connecting to is properly encoded
233s if url.startswith("/"):
233s url = to_str(_encode_target(url))
233s else:
233s url = to_str(parsed_url.url)
233s
233s conn = None
233s
233s # Track whether `conn` needs to be released before
233s # returning/raising/recursing. Update this variable if necessary, and
233s # leave `release_conn` constant throughout the function. That way, if
233s # the function recurses, the original value of `release_conn` will be
233s # passed down into the recursive call, and its value will be respected.
233s #
233s # See issue #651 [1] for details.
233s #
233s # [1]
233s release_this_conn = release_conn
233s
233s http_tunnel_required = connection_requires_http_tunnel(
233s self.proxy, self.proxy_config, destination_scheme
233s )
233s
233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s # have to copy the headers dict so we can safely change it without those
233s # changes being reflected in anyone else's copy.
233s if not http_tunnel_required:
233s headers = headers.copy() # type: ignore[attr-defined]
233s headers.update(self.proxy_headers) # type: ignore[union-attr]
233s
233s # Must keep the exception bound to a separate variable or else Python 3
233s # complains about UnboundLocalError.
233s err = None
233s
233s # Keep track of whether we cleanly exited the except block. This
233s # ensures we do proper cleanup in finally.
233s clean_exit = False
233s
233s # Rewind body position, if needed. Record current position
233s # for future rewinds in the event of a redirect/retry.
233s body_pos = set_file_position(body, body_pos)
233s
233s try:
233s # Request a connection from the queue.
233s timeout_obj = self._get_timeout(timeout)
233s conn = self._get_conn(timeout=pool_timeout)
233s
233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
233s
233s # Is this a closed/new connection that requires CONNECT tunnelling?
233s if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s try:
233s self._prepare_proxy(conn)
233s except (BaseSSLError, OSError, SocketTimeout) as e:
233s self._raise_timeout(
233s err=e, url=self.proxy.url, timeout_value=conn.timeout
233s )
233s raise
233s
233s # If we're going to release the connection in ``finally:``, then
233s # the response doesn't need to know about the connection. Otherwise
233s # it will also try to release it and we'll have a double-release
233s # mess.
233s response_conn = conn if not release_conn else None
233s
233s # Make the request on the HTTPConnection object
233s > response = self._make_request(
233s conn,
233s method,
233s url,
233s timeout=timeout_obj,
233s body=body,
233s headers=headers,
233s chunked=chunked,
233s retries=retries,
233s response_conn=response_conn,
233s preload_content=preload_content,
233s decode_content=decode_content,
233s **response_kw,
233s )
233s
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s
233s self =
233s
233s def _new_conn(self) -> socket.socket:
233s """Establish a socket connection and set nodelay settings on it.
233s
233s :return: New socket connection.
233s """
233s try:
233s sock = connection.create_connection(
233s (self._dns_host, self.port),
233s self.timeout,
233s source_address=self.source_address,
233s socket_options=self.socket_options,
233s )
233s except socket.gaierror as e:
233s raise NameResolutionError(self.host, self, e) from e
233s except SocketTimeout as e:
233s raise ConnectTimeoutError(
233s self,
233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s ) from e
233s
233s except OSError as e:
233s > raise NewConnectionError(
233s self, f"Failed to establish a new connection: {e}"
233s ) from e
233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s
233s The above exception was the direct cause of the following exception:
233s
233s self =
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s
233s def send(
233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s ):
233s """Sends PreparedRequest object. Returns Response object.
233s
233s :param request: The :class:`PreparedRequest ` being sent.
233s :param stream: (optional) Whether to stream the request content.
233s :param timeout: (optional) How long to wait for the server to send
233s data before giving up, as a float, or a :ref:`(connect timeout,
233s read timeout) ` tuple.
233s :type timeout: float or tuple or urllib3 Timeout object
233s :param verify: (optional) Either a boolean, in which case it controls whether
233s we verify the server's TLS certificate, or a string, in which case it
233s must be a path to a CA bundle to use
233s :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s :param proxies: (optional) The proxies dictionary to apply to the request.
233s :rtype: requests.Response
233s """
233s
233s try:
233s conn = self.get_connection(request.url, proxies)
233s except LocationValueError as e:
233s raise InvalidURL(e, request=request)
233s
233s self.cert_verify(conn, request.url, verify, cert)
233s url = self.request_url(request, proxies)
233s self.add_headers(
233s request,
233s stream=stream,
233s timeout=timeout,
233s verify=verify,
233s cert=cert,
233s proxies=proxies,
233s )
233s
233s chunked = not (request.body is None or "Content-Length" in request.headers)
233s
233s if isinstance(timeout, tuple):
233s try:
233s connect, read = timeout
233s timeout = TimeoutSauce(connect=connect, read=read)
233s except ValueError:
233s raise ValueError(
233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s f"or a single float to set both timeouts to the same value."
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_404_hidden ______ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
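The `wait_until_alive` loop in the traceback above polls the server URL until it responds, swallowing connection errors while the server thread starts. A minimal sketch of that poll-until-alive pattern (using a raw TCP probe instead of `requests.get`; `MAX_WAITTIME` and `POLL_INTERVAL` are illustrative stand-ins for the constants in `notebook/tests/launchnotebook.py`):

```python
import socket
import time

# Illustrative values; the real MAX_WAITTIME / POLL_INTERVAL are defined
# in notebook/tests/launchnotebook.py and may differ.
MAX_WAITTIME = 30
POLL_INTERVAL = 0.1

def wait_until_alive(host, port, max_wait=MAX_WAITTIME, poll=POLL_INTERVAL):
    """Poll a TCP endpoint until it accepts connections or time runs out.

    Returns True once a connection succeeds, False if the deadline passes.
    Connection-refused errors (the Errno 111 seen in the log) are treated
    as "not up yet" and retried after a short sleep.
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=poll):
                return True
        except OSError:  # includes ConnectionRefusedError and timeouts
            time.sleep(poll)
    return False
```

In the failing runs above, every poll hits `Connection refused` because the notebook server thread never binds the port, so the loop exhausts its budget and the harness raises `RuntimeError("The notebook server failed to start")`.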
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
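The `Retry(total=0, ...)` object visible in the traceback explains why a single refused connection surfaces as `MaxRetryError`: each error decrements the budget, and a newly built `Retry` whose counters have gone negative is "exhausted". A toy model of that counter logic (simplified; the real `urllib3.util.retry.Retry.increment` also tracks separate connect/read/status counters):

```python
class MaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""

class ToyRetry:
    """Toy model of the exhaustion check in Retry.increment (not the real class)."""

    def __init__(self, total=0):
        self.total = total

    def is_exhausted(self):
        # A negative budget means the last allowed attempt has been spent.
        return self.total < 0

    def increment(self, error):
        """Return a new ToyRetry with one fewer retry, or raise when spent."""
        new_retry = ToyRetry(total=self.total - 1)
        if new_retry.is_exhausted():
            # Mirrors: raise MaxRetryError(_pool, url, reason) from reason
            raise MaxRetryError("Max retries exceeded") from error
        return new_retry
```

With `total=0`, as requests' adapter configures it here, the very first `ECONNREFUSED` drives the budget to -1, so the error is wrapped and re-raised immediately rather than retried.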
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
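The adapter code above normalizes `timeout` into a connect/read pair: a `(connect, read)` tuple is unpacked, and any other value is applied to both phases. A hedged stdlib re-statement of just that branch, with `TimeoutSauce` replaced by a plain dict for illustration:

```python
def normalize_timeout(timeout):
    """Mimic requests' adapter: accept a (connect, read) tuple or one number."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                f"or a single float to set both timeouts to the same value."
            )
        return {"connect": connect, "read": read}
    # A single number (or None) applies to both the connect and read phases.
    return {"connect": timeout, "read": timeout}

print(normalize_timeout((3.05, 27)))  # {'connect': 3.05, 'read': 27}
print(normalize_timeout(5))           # {'connect': 5, 'read': 5}
```

Note that in this test run no timeout was supplied at all (`Timeout(connect=None, read=None, total=None)`), so the refused connection fails immediately rather than timing out.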
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
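`setup_class` above builds its isolated directory tree with a small `tmp()` helper that tolerates already-existing paths. The same idea, runnable on its own — `base` stands in for `cls.tmp_dir.name`, and on modern Python `os.makedirs(..., exist_ok=True)` collapses this into one call:

```python
import errno
import os
import tempfile

base = tempfile.mkdtemp()  # stands in for cls.tmp_dir.name

def tmp(*parts):
    """Create (if missing) and return a directory under the test sandbox."""
    path = os.path.join(base, *parts)
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:  # only "already exists" is benign
            raise
    return path

home_dir = tmp("home")
data_dir = tmp("data")
print(tmp("home") == home_dir)  # -> True: calling twice is harmless
```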
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
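In the `setup_class` shown here the server runs on a daemon thread, and the main thread blocks on a `threading.Event` that the server thread sets once the IOLoop is running; the `finally:` block sets it again so a crash during startup cannot leave the waiter hanging forever (which is exactly why `wait_until_alive` then has to check whether the thread is still alive). A minimal sketch of that handshake, with `serve()` standing in for `app.start()`:

```python
import threading

started = threading.Event()
state = {}

def serve():
    try:
        state["port"] = 12341          # stand-in for the real server setup
        started.set()                  # tell the waiter we are up
        # app.start() would block here, serving requests
    finally:
        started.set()                  # even on failure, never hang the waiter

thread = threading.Thread(target=serve, daemon=True)
thread.start()
assert started.wait(timeout=5)         # True once the event is set
print(state["port"])                   # -> 12341
```

The weakness visible in this log is that `started` being set does not prove the server is healthy — it is also set on failure — hence the follow-up polling in `wait_until_alive`.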
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_get_bad_type _______ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
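`Retry.increment`, shown twice in these tracebacks, returns a copy of the `Retry` object with the relevant counter decremented and raises `MaxRetryError` once the new object is exhausted. Assuming urllib3 is importable (it is on this test bed), the bookkeeping can be observed directly; a generic `OSError` falls through to the "Other retry?" branch, so only `total` is decremented:

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

retry = Retry(total=1)
retry = retry.increment(method="GET", url="/a%40b/api/contents",
                        error=OSError("Connection refused"))
print(retry.total)  # -> 0

try:
    retry.increment(method="GET", url="/a%40b/api/contents",
                    error=OSError("Connection refused"))
except MaxRetryError as e:
    print("exhausted:", type(e.reason).__name__)
```

The failures above use `Retry(total=0, ...)` (requests' default adapter configuration), so the very first refused connection exhausts the budget and surfaces as `MaxRetryError`.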
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_binary_file_contents _
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s 
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object. Passing the optional
233s         *timeout* parameter will set the timeout on the socket instance
233s         before attempting to connect. If no *timeout* is supplied, the
233s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
233s         is used. If *source_address* is set it must be a tuple of (host, port)
233s         for the socket to bind as a source address before making the connection.
233s         An host of '' or port 0 tells the OS to use the default.
233s         """
233s 
233s         host, port = address
233s         if host.startswith("["):
233s             host = host.strip("[]")
233s         err = None
233s 
233s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
233s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
233s         # The original create_connection function always returns all records.
233s         family = allowed_gai_family()
233s 
233s         try:
233s             host.encode("idna")
233s         except UnicodeError:
233s             raise LocationParseError(f"'{host}', label empty or too long") from None
233s 
233s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
233s             af, socktype, proto, canonname, sa = res
233s             sock = None
233s             try:
233s                 sock = socket.socket(af, socktype, proto)
233s 
233s                 # If provided, set socket level options before connecting.
233s                 _set_socket_options(sock, socket_options)
233s 
233s                 if timeout is not _DEFAULT_TIMEOUT:
233s                     sock.settimeout(timeout)
233s                 if source_address:
233s                     sock.bind(source_address)
233s >               sock.connect(sa)
233s E               ConnectionRefusedError: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s method = 'GET', url = '/a%40b/api/contents', body = None
233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s redirect = False, assert_same_host = False
233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
233s release_conn = False, chunked = False, body_pos = None, preload_content = False
233s decode_content = False, response_kw = {}
233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
233s destination_scheme = None, conn = None, release_this_conn = True
233s http_tunnel_required = False, err = None, clean_exit = False
233s 
233s     def urlopen(  # type: ignore[override]
233s         self,
233s         method: str,
233s         url: str,
233s         body: _TYPE_BODY | None = None,
233s         headers: typing.Mapping[str, str] | None = None,
233s         retries: Retry | bool | int | None = None,
233s         redirect: bool = True,
233s         assert_same_host: bool = True,
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         pool_timeout: int | None = None,
233s         release_conn: bool | None = None,
233s         chunked: bool = False,
233s         body_pos: _TYPE_BODY_POSITION | None = None,
233s         preload_content: bool = True,
233s         decode_content: bool = True,
233s         **response_kw: typing.Any,
233s     ) -> BaseHTTPResponse:
233s         """
233s         Get a connection from the pool and perform an HTTP request. This is the
233s         lowest level call for making a request, so you'll need to specify all
233s         the raw details.
233s 
233s         .. note::
233s 
233s            More commonly, it's appropriate to use a convenience method
233s            such as :meth:`request`.
233s 
233s         .. note::
233s 
233s            `release_conn` will only behave as expected if
233s            `preload_content=False` because we want to make
233s            `preload_content=False` the default behaviour someday soon without
233s            breaking backwards compatibility.
233s 
233s         :param method:
233s             HTTP request method (such as GET, POST, PUT, etc.)
233s 
233s         :param url:
233s             The URL to perform the request on.
233s 
233s         :param body:
233s             Data to send in the request body, either :class:`str`, :class:`bytes`,
233s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
233s 
233s         :param headers:
233s             Dictionary of custom headers to send, such as User-Agent,
233s             If-None-Match, etc. If None, pool headers are used. If provided,
233s             these headers completely replace any pool-specific headers.
233s 
233s         :param retries:
233s             Configure the number of retries to allow before raising a
233s             :class:`~urllib3.exceptions.MaxRetryError` exception.
233s 
233s             Pass ``None`` to retry until you receive a response. Pass a
233s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
233s             over different types of retries.
233s             Pass an integer number to retry connection errors that many times,
233s             but no other types of errors. Pass zero to never retry.
233s 
233s             If ``False``, then retries are disabled and any exception is raised
233s             immediately. Also, instead of raising a MaxRetryError on redirects,
233s             the redirect response will be returned.
233s 
233s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
233s 
233s         :param redirect:
233s             If True, automatically handle redirects (status codes 301, 302,
233s             303, 307, 308). Each redirect counts as a retry. Disabling retries
233s             will disable redirect, too.
233s 
233s         :param assert_same_host:
233s             If ``True``, will make sure that the host of the pool requests is
233s             consistent else will raise HostChangedError. When ``False``, you can
233s             use the pool on an HTTP proxy and request foreign hosts.
233s 
233s         :param timeout:
233s             If specified, overrides the default timeout for this one
233s             request. It may be a float (in seconds) or an instance of
233s             :class:`urllib3.util.Timeout`.
233s 
233s         :param pool_timeout:
233s             If set and the pool is set to block=True, then this method will
233s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
233s             connection is available within the time period.
233s 
233s         :param bool preload_content:
233s             If True, the response's body will be preloaded into memory.
233s 
233s         :param bool decode_content:
233s             If True, will attempt to decode the body based on the
233s             'content-encoding' header.
233s 
233s         :param release_conn:
233s             If False, then the urlopen call will not release the connection
233s             back into the pool once a response is received (but will release if
233s             you read the entire contents of the response such as when
233s             `preload_content=True`). This is useful if you're not preloading
233s             the response's content immediately. You will need to call
233s             ``r.release_conn()`` on the response ``r`` to return the connection
233s             back into the pool. If None, it takes the value of ``preload_content``
233s             which defaults to ``True``.
233s 
233s         :param bool chunked:
233s             If True, urllib3 will send the body using chunked transfer
233s             encoding. Otherwise, urllib3 will send the body using the standard
233s             content-length form. Defaults to False.
233s 
233s         :param int body_pos:
233s             Position to seek to in file-like body in the event of a retry or
233s             redirect. Typically this won't need to be set because urllib3 will
233s             auto-populate the value when needed.
233s         """
233s         parsed_url = parse_url(url)
233s         destination_scheme = parsed_url.scheme
233s 
233s         if headers is None:
233s             headers = self.headers
233s 
233s         if not isinstance(retries, Retry):
233s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
233s 
233s         if release_conn is None:
233s             release_conn = preload_content
233s 
233s         # Check host
233s         if assert_same_host and not self.is_same_host(url):
233s             raise HostChangedError(self, url, retries)
233s 
233s         # Ensure that the URL we're connecting to is properly encoded
233s         if url.startswith("/"):
233s             url = to_str(_encode_target(url))
233s         else:
233s             url = to_str(parsed_url.url)
233s 
233s         conn = None
233s 
233s         # Track whether `conn` needs to be released before
233s         # returning/raising/recursing. Update this variable if necessary, and
233s         # leave `release_conn` constant throughout the function. That way, if
233s         # the function recurses, the original value of `release_conn` will be
233s         # passed down into the recursive call, and its value will be respected.
233s         #
233s         # See issue #651 [1] for details.
233s         #
233s         # [1] 
233s         release_this_conn = release_conn
233s 
233s         http_tunnel_required = connection_requires_http_tunnel(
233s             self.proxy, self.proxy_config, destination_scheme
233s         )
233s 
233s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
233s         # have to copy the headers dict so we can safely change it without those
233s         # changes being reflected in anyone else's copy.
233s         if not http_tunnel_required:
233s             headers = headers.copy()  # type: ignore[attr-defined]
233s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
233s 
233s         # Must keep the exception bound to a separate variable or else Python 3
233s         # complains about UnboundLocalError.
233s         err = None
233s 
233s         # Keep track of whether we cleanly exited the except block. This
233s         # ensures we do proper cleanup in finally.
233s         clean_exit = False
233s 
233s         # Rewind body position, if needed. Record current position
233s         # for future rewinds in the event of a redirect/retry.
233s         body_pos = set_file_position(body, body_pos)
233s 
233s         try:
233s             # Request a connection from the queue.
233s             timeout_obj = self._get_timeout(timeout)
233s             conn = self._get_conn(timeout=pool_timeout)
233s 
233s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
233s 
233s             # Is this a closed/new connection that requires CONNECT tunnelling?
233s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
233s                 try:
233s                     self._prepare_proxy(conn)
233s                 except (BaseSSLError, OSError, SocketTimeout) as e:
233s                     self._raise_timeout(
233s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
233s                     )
233s                     raise
233s 
233s             # If we're going to release the connection in ``finally:``, then
233s             # the response doesn't need to know about the connection. Otherwise
233s             # it will also try to release it and we'll have a double-release
233s             # mess.
233s             response_conn = conn if not release_conn else None
233s 
233s             # Make the request on the HTTPConnection object
233s >           response = self._make_request(
233s                 conn,
233s                 method,
233s                 url,
233s                 timeout=timeout_obj,
233s                 body=body,
233s                 headers=headers,
233s                 chunked=chunked,
233s                 retries=retries,
233s                 response_conn=response_conn,
233s                 preload_content=preload_content,
233s                 decode_content=decode_content,
233s                 **response_kw,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
233s     conn.request(
233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
233s     self.endheaders()
233s /usr/lib/python3.12/http/client.py:1331: in endheaders
233s     self._send_output(message_body, encode_chunked=encode_chunked)
233s /usr/lib/python3.12/http/client.py:1091: in _send_output
233s     self.send(msg)
233s /usr/lib/python3.12/http/client.py:1035: in send
233s     self.connect()
233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
233s     self.sock = self._new_conn()
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s             sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s         except socket.gaierror as e:
233s             raise NameResolutionError(self.host, self, e) from e
233s         except SocketTimeout as e:
233s             raise ConnectTimeoutError(
233s                 self,
233s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
233s             ) from e
233s 
233s         except OSError as e:
233s >           raise NewConnectionError(
233s                 self, f"Failed to establish a new connection: {e}"
233s             ) from e
233s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
233s 
233s The above exception was the direct cause of the following exception:
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s >           resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
233s     retries = retries.increment(
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
233s method = 'GET', url = '/a%40b/api/contents', response = None
233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
233s _pool = 
233s _stacktrace = 
233s 
233s     def increment(
233s         self,
233s         method: str | None = None,
233s         url: str | None = None,
233s         response: BaseHTTPResponse | None = None,
233s         error: Exception | None = None,
233s         _pool: ConnectionPool | None = None,
233s         _stacktrace: TracebackType | None = None,
233s     ) -> Retry:
233s         """Return a new Retry object with incremented retry counters.
233s 
233s         :param response: A response object, or None, if the server did not
233s             return a response.
233s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
233s         :param Exception error: An error encountered during the request, or
233s             None if the response was received successfully.
233s 
233s         :return: A new ``Retry`` object.
233s         """
233s         if self.total is False and error:
233s             # Disabled, indicate to re-raise the error.
233s             raise reraise(type(error), error, _stacktrace)
233s 
233s         total = self.total
233s         if total is not None:
233s             total -= 1
233s 
233s         connect = self.connect
233s         read = self.read
233s         redirect = self.redirect
233s         status_count = self.status
233s         other = self.other
233s         cause = "unknown"
233s         status = None
233s         redirect_location = None
233s 
233s         if error and self._is_connection_error(error):
233s             # Connect retry?
233s             if connect is False:
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif connect is not None:
233s                 connect -= 1
233s 
233s         elif error and self._is_read_error(error):
233s             # Read retry?
233s             if read is False or method is None or not self._is_method_retryable(method):
233s                 raise reraise(type(error), error, _stacktrace)
233s             elif read is not None:
233s                 read -= 1
233s 
233s         elif error:
233s             # Other retry?
233s             if other is not None:
233s                 other -= 1
233s 
233s         elif response and response.get_redirect_location():
233s             # Redirect retry?
233s             if redirect is not None:
233s                 redirect -= 1
233s             cause = "too many redirects"
233s             response_redirect_location = response.get_redirect_location()
233s             if response_redirect_location:
233s                 redirect_location = response_redirect_location
233s             status = response.status
233s 
233s         else:
233s             # Incrementing because of a server error like a 500 in
233s             # status_forcelist and the given method is in the allowed_methods
233s             cause = ResponseError.GENERIC_ERROR
233s             if response and response.status:
233s                 if status_count is not None:
233s                     status_count -= 1
233s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
233s                 status = response.status
233s 
233s         history = self.history + (
233s             RequestHistory(method, url, error, status, redirect_location),
233s         )
233s 
233s         new_retry = self.new(
233s             total=total,
233s             connect=connect,
233s             read=read,
233s             redirect=redirect,
233s             status=status_count,
233s             other=other,
233s             history=history,
233s         )
233s 
233s         if new_retry.is_exhausted():
233s             reason = error or ResponseError(cause)
233s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
233s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
233s 
233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
233s 
233s During handling of the above exception, another exception occurred:
233s 
233s cls = 
233s 
233s     @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s >               cls.fetch_url(url)
233s 
233s notebook/tests/launchnotebook.py:53: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s notebook/tests/launchnotebook.py:82: in fetch_url
233s     return requests.get(url)
233s /usr/lib/python3/dist-packages/requests/api.py:73: in get
233s     return request("get", url, params=params, **kwargs)
233s /usr/lib/python3/dist-packages/requests/api.py:59: in request
233s     return session.request(method=method, url=url, **kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
233s     resp = self.send(prep, **send_kwargs)
233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
233s     r = adapter.send(request, **kwargs)
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233s 
233s self = 
233s request = , stream = False
233s timeout = Timeout(connect=None, read=None, total=None), verify = True
233s cert = None, proxies = OrderedDict()
233s 
233s     def send(
233s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
233s     ):
233s         """Sends PreparedRequest object. Returns Response object.
233s 
233s         :param request: The :class:`PreparedRequest ` being sent.
233s         :param stream: (optional) Whether to stream the request content.
233s         :param timeout: (optional) How long to wait for the server to send
233s             data before giving up, as a float, or a :ref:`(connect timeout,
233s             read timeout) ` tuple.
233s         :type timeout: float or tuple or urllib3 Timeout object
233s         :param verify: (optional) Either a boolean, in which case it controls whether
233s             we verify the server's TLS certificate, or a string, in which case it
233s             must be a path to a CA bundle to use
233s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
233s         :param proxies: (optional) The proxies dictionary to apply to the request.
233s         :rtype: requests.Response
233s         """
233s 
233s         try:
233s             conn = self.get_connection(request.url, proxies)
233s         except LocationValueError as e:
233s             raise InvalidURL(e, request=request)
233s 
233s         self.cert_verify(conn, request.url, verify, cert)
233s         url = self.request_url(request, proxies)
233s         self.add_headers(
233s             request,
233s             stream=stream,
233s             timeout=timeout,
233s             verify=verify,
233s             cert=cert,
233s             proxies=proxies,
233s         )
233s 
233s         chunked = not (request.body is None or "Content-Length" in request.headers)
233s 
233s         if isinstance(timeout, tuple):
233s             try:
233s                 connect, read = timeout
233s                 timeout = TimeoutSauce(connect=connect, read=read)
233s             except ValueError:
233s                 raise ValueError(
233s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
233s                     f"or a single float to set both timeouts to the same value."
233s                 )
233s         elif isinstance(timeout, TimeoutSauce):
233s             pass
233s         else:
233s             timeout = TimeoutSauce(connect=timeout, read=timeout)
233s 
233s         try:
233s             resp = conn.urlopen(
233s                 method=request.method,
233s                 url=url,
233s                 body=request.body,
233s                 headers=request.headers,
233s                 redirect=False,
233s                 assert_same_host=False,
233s                 preload_content=False,
233s                 decode_content=False,
233s                 retries=self.max_retries,
233s                 timeout=timeout,
233s                 chunked=chunked,
233s             )
233s 
233s         except (ProtocolError, OSError) as err:
233s             raise ConnectionError(err, request=request)
233s 
233s         except MaxRetryError as e:
233s             if isinstance(e.reason, ConnectTimeoutError):
233s                 # TODO: Remove this in 3.0.0: see #2811
233s                 if not isinstance(e.reason, NewConnectionError):
233s                     raise ConnectTimeout(e, request=request)
233s 
233s             if isinstance(e.reason, ResponseError):
233s                 raise RetryError(e, request=request)
233s 
233s             if isinstance(e.reason, _ProxyError):
233s                 raise ProxyError(e, request=request)
233s 
233s             if isinstance(e.reason, _SSLError):
233s                 # This branch is for urllib3 v1.22 and later.
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_contents_no_such_file _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
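As an editorial aside on the `retries` contract the docstring above describes (a `Retry` object, an int, `False`, or `None`): the decision rules can be sketched in plain Python. This is an illustration of the documented semantics only, not urllib3's implementation, and `interpret_retries` is a hypothetical helper name.

```python
# Sketch of the documented `retries` semantics from the urlopen docstring.
# Hypothetical helper, not part of urllib3.

def interpret_retries(retries):
    """Map a `retries` argument to (max_connect_retries, raise_immediately)."""
    if retries is False:
        # Retries disabled: any exception is raised immediately.
        return 0, True
    if retries is None:
        # Retry until a response is received (represented here as unbounded).
        return float("inf"), False
    if isinstance(retries, int):
        # An integer retries connection errors that many times; zero never retries.
        return retries, False
    raise TypeError("expected Retry, int, False, or None")

print(interpret_retries(False))   # (0, True)
print(interpret_retries(3))       # (3, False)
```

Note that the failing tests in this log run with `Retry(total=0, ...)`, i.e. the "never retry" case, which is why the first refused connection is fatal.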
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
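The `Retry.increment` frame above decrements the relevant counter and raises `MaxRetryError` once the budget is exhausted, which is exactly what produces the `Max retries exceeded` error in this log. A simplified, self-contained sketch of that bookkeeping follows; `MiniRetry` and `MaxRetriesExceeded` are illustrative stand-ins, not the real classes from `urllib3.util.retry`.

```python
# Minimal sketch of retry bookkeeping in the spirit of urllib3's Retry.increment.
# MiniRetry is a hypothetical stand-in, not the real urllib3 class.

class MaxRetriesExceeded(Exception):
    pass

class MiniRetry:
    def __init__(self, total):
        self.total = total

    def increment(self, error):
        if self.total is False:
            # Disabled: re-raise the original error unchanged.
            raise error
        new_total = self.total - 1 if self.total is not None else None
        if new_total is not None and new_total < 0:
            # Exhausted: surface a MaxRetryError-style exception instead.
            raise MaxRetriesExceeded(f"max retries exceeded: {error}")
        return MiniRetry(new_total)

r = MiniRetry(total=0)
try:
    r.increment(ConnectionRefusedError(111, "Connection refused"))
except MaxRetriesExceeded as e:
    print("exhausted:", e)
```

With `total=0`, as in the traceback, the very first connection error exhausts the budget, so the `ConnectionRefusedError` is wrapped and re-raised rather than retried.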
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_get_dir_no_content ____ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_contents _____ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
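233s The `[Errno 111] Connection refused` at the root of this frame simply means nothing is listening on the target port (here, the notebook server thread died before binding port 12341). It can be reproduced with the stdlib alone; the `probe` helper and the free-port trick below are illustrative, not part of the notebook test suite:

```python
import errno
import socket

def probe(host: str, port: int, timeout: float = 1.0):
    """Try a TCP connect; return None on success, or the OSError on failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return None
    except OSError as exc:
        return exc

# Pick a port that is almost certainly not listening: bind an
# ephemeral port, record its number, then close the listener.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
_, free_port = tmp.getsockname()
tmp.close()

err = probe("127.0.0.1", free_port)
if err is not None:
    # ECONNREFUSED (111 on Linux), the same errno as in this log
    print(err.errno == errno.ECONNREFUSED)
```

urllib3 wraps exactly this `OSError` in `NewConnectionError`, requests wraps that in `ConnectionError`, and the test harness's `wait_until_alive` loop finally reports "The notebook server failed to start".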
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s         cls.path_patch = patch.multiple(
233s             jupyter_core.paths,
233s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
233s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
233s         )
233s         cls.path_patch.start()
233s 
233s         config = cls.config or Config()
233s         config.NotebookNotary.db_file = ':memory:'
233s 
233s         cls.token = hexlify(os.urandom(4)).decode('ascii')
233s 
233s         started = Event()
233s         def start_thread():
233s             try:
233s                 bind_args = cls.get_bind_args()
233s                 app = cls.notebook = NotebookApp(
233s                     port_retries=0,
233s                     open_browser=False,
233s                     config_dir=cls.config_dir,
233s                     data_dir=cls.data_dir,
233s                     runtime_dir=cls.runtime_dir,
233s                     notebook_dir=cls.notebook_dir,
233s                     base_url=cls.url_prefix,
233s                     config=config,
233s                     allow_root=True,
233s                     token=cls.token,
233s                     **bind_args
233s                 )
233s                 if "asyncio" in sys.modules:
233s                     app._init_asyncio_patch()
233s                 import asyncio
233s 
233s                 asyncio.set_event_loop(asyncio.new_event_loop())
233s                 # Patch the current loop in order to match production
233s                 # behavior
233s                 import nest_asyncio
233s 
233s                 nest_asyncio.apply()
233s                 # don't register signal handler during tests
233s                 app.init_signal = lambda : None
233s                 # clear log handlers and propagate to root for nose to capture it
233s                 # needs to be redone after initialize, which reconfigures logging
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 app.initialize(argv=cls.get_argv())
233s                 app.log.propagate = True
233s                 app.log.handlers = []
233s                 loop = IOLoop.current()
233s                 loop.add_callback(started.set)
233s                 app.start()
233s             finally:
233s                 # set the event, so failure to start doesn't cause a hang
233s                 started.set()
233s                 app.session_manager.close()
233s         cls.notebook_thread = Thread(target=start_thread)
233s         cls.notebook_thread.daemon = True
233s         cls.notebook_thread.start()
233s         started.wait()
233s >       cls.wait_until_alive()
233s 
233s notebook/tests/launchnotebook.py:198: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s cls = 
233s 
233s
    @classmethod
233s     def wait_until_alive(cls):
233s         """Wait for the server to be alive"""
233s         url = cls.base_url() + 'api/contents'
233s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
233s             try:
233s                 cls.fetch_url(url)
233s             except ModuleNotFoundError as error:
233s                 # Errors that should be immediately thrown back to caller
233s                 raise error
233s             except Exception as e:
233s                 if not cls.notebook_thread.is_alive():
233s >                   raise RuntimeError("The notebook server failed to start") from e
233s E                   RuntimeError: The notebook server failed to start
233s 
233s notebook/tests/launchnotebook.py:59: RuntimeError
233s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_invalid ______
233s 
233s self = 
233s 
233s     def _new_conn(self) -> socket.socket:
233s         """Establish a socket connection and set nodelay settings on it.
233s 
233s         :return: New socket connection.
233s         """
233s         try:
233s >           sock = connection.create_connection(
233s                 (self._dns_host, self.port),
233s                 self.timeout,
233s                 source_address=self.source_address,
233s                 socket_options=self.socket_options,
233s             )
233s 
233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
233s     raise err
233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
233s 
233s address = ('localhost', 12341), timeout = None, source_address = None
233s socket_options = [(6, 1, 1)]
233s 
233s     def create_connection(
233s         address: tuple[str, int],
233s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
233s         source_address: tuple[str, int] | None = None,
233s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
233s     ) -> socket.socket:
233s         """Connect to *address* and return the socket object.
233s 
233s         Convenience function. Connect to *address* (a 2-tuple ``(host,
233s         port)``) and return the socket object.
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_no_content ____ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_text_file_contents __ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_list_dirs ________
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 233s headers = self.headers 233s 233s if not isinstance(retries, Retry): 233s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 233s 233s if release_conn is None: 233s release_conn = preload_content 233s 233s # Check host 233s if assert_same_host and not self.is_same_host(url): 233s raise HostChangedError(self, url, retries) 233s 233s # Ensure that the URL we're connecting to is properly encoded 233s if url.startswith("/"): 233s url = to_str(_encode_target(url)) 233s else: 233s url = to_str(parsed_url.url) 233s 233s conn = None 233s 233s # Track whether `conn` needs to be released before 233s # returning/raising/recursing. Update this variable if necessary, and 233s # leave `release_conn` constant throughout the function. That way, if 233s # the function recurses, the original value of `release_conn` will be 233s # passed down into the recursive call, and its value will be respected. 233s # 233s # See issue #651 [1] for details. 233s # 233s # [1] 233s release_this_conn = release_conn 233s 233s http_tunnel_required = connection_requires_http_tunnel( 233s self.proxy, self.proxy_config, destination_scheme 233s ) 233s 233s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 233s # have to copy the headers dict so we can safely change it without those 233s # changes being reflected in anyone else's copy. 233s if not http_tunnel_required: 233s headers = headers.copy() # type: ignore[attr-defined] 233s headers.update(self.proxy_headers) # type: ignore[union-attr] 233s 233s # Must keep the exception bound to a separate variable or else Python 3 233s # complains about UnboundLocalError. 233s err = None 233s 233s # Keep track of whether we cleanly exited the except block. This 233s # ensures we do proper cleanup in finally. 233s clean_exit = False 233s 233s # Rewind body position, if needed. 
Record current position 233s # for future rewinds in the event of a redirect/retry. 233s body_pos = set_file_position(body, body_pos) 233s 233s try: 233s # Request a connection from the queue. 233s timeout_obj = self._get_timeout(timeout) 233s conn = self._get_conn(timeout=pool_timeout) 233s 233s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 233s 233s # Is this a closed/new connection that requires CONNECT tunnelling? 233s if self.proxy is not None and http_tunnel_required and conn.is_closed: 233s try: 233s self._prepare_proxy(conn) 233s except (BaseSSLError, OSError, SocketTimeout) as e: 233s self._raise_timeout( 233s err=e, url=self.proxy.url, timeout_value=conn.timeout 233s ) 233s raise 233s 233s # If we're going to release the connection in ``finally:``, then 233s # the response doesn't need to know about the connection. Otherwise 233s # it will also try to release it and we'll have a double-release 233s # mess. 233s response_conn = conn if not release_conn else None 233s 233s # Make the request on the HTTPConnection object 233s > response = self._make_request( 233s conn, 233s method, 233s url, 233s timeout=timeout_obj, 233s body=body, 233s headers=headers, 233s chunked=chunked, 233s retries=retries, 233s response_conn=response_conn, 233s preload_content=preload_content, 233s decode_content=decode_content, 233s **response_kw, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 233s conn.request( 233s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 233s self.endheaders() 233s /usr/lib/python3.12/http/client.py:1331: in endheaders 233s self._send_output(message_body, encode_chunked=encode_chunked) 233s /usr/lib/python3.12/http/client.py:1091: in _send_output 233s self.send(msg) 233s /usr/lib/python3.12/http/client.py:1035: in 
send 233s self.connect() 233s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 233s self.sock = self._new_conn() 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s except socket.gaierror as e: 233s raise NameResolutionError(self.host, self, e) from e 233s except SocketTimeout as e: 233s raise ConnectTimeoutError( 233s self, 233s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 233s ) from e 233s 233s except OSError as e: 233s > raise NewConnectionError( 233s self, f"Failed to establish a new connection: {e}" 233s ) from e 233s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 
233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s > resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:486: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 233s retries = retries.increment( 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s method = 'GET', url = '/a%40b/api/contents', response = None 233s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 233s _pool = 233s _stacktrace = 233s 233s def increment( 233s self, 233s method: str | None = None, 233s url: str | None = None, 233s response: BaseHTTPResponse | None = None, 233s error: Exception | None = None, 233s _pool: ConnectionPool | None = None, 233s _stacktrace: TracebackType | None = None, 233s ) -> Retry: 233s """Return a new Retry object with incremented retry counters. 233s 233s :param response: A response object, or None, if the server did not 233s return a response. 233s :type response: :class:`~urllib3.response.BaseHTTPResponse` 233s :param Exception error: An error encountered during the request, or 233s None if the response was received successfully. 233s 233s :return: A new ``Retry`` object. 233s """ 233s if self.total is False and error: 233s # Disabled, indicate to re-raise the error. 
233s raise reraise(type(error), error, _stacktrace) 233s 233s total = self.total 233s if total is not None: 233s total -= 1 233s 233s connect = self.connect 233s read = self.read 233s redirect = self.redirect 233s status_count = self.status 233s other = self.other 233s cause = "unknown" 233s status = None 233s redirect_location = None 233s 233s if error and self._is_connection_error(error): 233s # Connect retry? 233s if connect is False: 233s raise reraise(type(error), error, _stacktrace) 233s elif connect is not None: 233s connect -= 1 233s 233s elif error and self._is_read_error(error): 233s # Read retry? 233s if read is False or method is None or not self._is_method_retryable(method): 233s raise reraise(type(error), error, _stacktrace) 233s elif read is not None: 233s read -= 1 233s 233s elif error: 233s # Other retry? 233s if other is not None: 233s other -= 1 233s 233s elif response and response.get_redirect_location(): 233s # Redirect retry? 233s if redirect is not None: 233s redirect -= 1 233s cause = "too many redirects" 233s response_redirect_location = response.get_redirect_location() 233s if response_redirect_location: 233s redirect_location = response_redirect_location 233s status = response.status 233s 233s else: 233s # Incrementing because of a server error like a 500 in 233s # status_forcelist and the given method is in the allowed_methods 233s cause = ResponseError.GENERIC_ERROR 233s if response and response.status: 233s if status_count is not None: 233s status_count -= 1 233s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 233s status = response.status 233s 233s history = self.history + ( 233s RequestHistory(method, url, error, status, redirect_location), 233s ) 233s 233s new_retry = self.new( 233s total=total, 233s connect=connect, 233s read=read, 233s redirect=redirect, 233s status=status_count, 233s other=other, 233s history=history, 233s ) 233s 233s if new_retry.is_exhausted(): 233s reason = error or 
ResponseError(cause) 233s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 233s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 233s 233s During handling of the above exception, another exception occurred: 233s 233s cls = 233s 233s @classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s > cls.fetch_url(url) 233s 233s notebook/tests/launchnotebook.py:53: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s notebook/tests/launchnotebook.py:82: in fetch_url 233s return requests.get(url) 233s /usr/lib/python3/dist-packages/requests/api.py:73: in get 233s return request("get", url, params=params, **kwargs) 233s /usr/lib/python3/dist-packages/requests/api.py:59: in request 233s return session.request(method=method, url=url, **kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 233s resp = self.send(prep, **send_kwargs) 233s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 233s r = adapter.send(request, **kwargs) 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s self = 233s request = , stream = False 233s timeout = Timeout(connect=None, read=None, total=None), verify = True 233s cert = None, proxies = OrderedDict() 233s 233s def send( 233s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 233s ): 233s """Sends PreparedRequest object. Returns Response object. 233s 233s :param request: The :class:`PreparedRequest ` being sent. 
233s :param stream: (optional) Whether to stream the request content. 233s :param timeout: (optional) How long to wait for the server to send 233s data before giving up, as a float, or a :ref:`(connect timeout, 233s read timeout) ` tuple. 233s :type timeout: float or tuple or urllib3 Timeout object 233s :param verify: (optional) Either a boolean, in which case it controls whether 233s we verify the server's TLS certificate, or a string, in which case it 233s must be a path to a CA bundle to use 233s :param cert: (optional) Any user-provided SSL certificate to be trusted. 233s :param proxies: (optional) The proxies dictionary to apply to the request. 233s :rtype: requests.Response 233s """ 233s 233s try: 233s conn = self.get_connection(request.url, proxies) 233s except LocationValueError as e: 233s raise InvalidURL(e, request=request) 233s 233s self.cert_verify(conn, request.url, verify, cert) 233s url = self.request_url(request, proxies) 233s self.add_headers( 233s request, 233s stream=stream, 233s timeout=timeout, 233s verify=verify, 233s cert=cert, 233s proxies=proxies, 233s ) 233s 233s chunked = not (request.body is None or "Content-Length" in request.headers) 233s 233s if isinstance(timeout, tuple): 233s try: 233s connect, read = timeout 233s timeout = TimeoutSauce(connect=connect, read=read) 233s except ValueError: 233s raise ValueError( 233s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 233s f"or a single float to set both timeouts to the same value." 
233s ) 233s elif isinstance(timeout, TimeoutSauce): 233s pass 233s else: 233s timeout = TimeoutSauce(connect=timeout, read=timeout) 233s 233s try: 233s resp = conn.urlopen( 233s method=request.method, 233s url=url, 233s body=request.body, 233s headers=request.headers, 233s redirect=False, 233s assert_same_host=False, 233s preload_content=False, 233s decode_content=False, 233s retries=self.max_retries, 233s timeout=timeout, 233s chunked=chunked, 233s ) 233s 233s except (ProtocolError, OSError) as err: 233s raise ConnectionError(err, request=request) 233s 233s except MaxRetryError as e: 233s if isinstance(e.reason, ConnectTimeoutError): 233s # TODO: Remove this in 3.0.0: see #2811 233s if not isinstance(e.reason, NewConnectionError): 233s raise ConnectTimeout(e, request=request) 233s 233s if isinstance(e.reason, ResponseError): 233s raise RetryError(e, request=request) 233s 233s if isinstance(e.reason, _ProxyError): 233s raise ProxyError(e, request=request) 233s 233s if isinstance(e.reason, _SSLError): 233s # This branch is for urllib3 v1.22 and later. 
233s raise SSLError(e, request=request) 233s 233s > raise ConnectionError(e, request=request) 233s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 233s 233s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 233s 233s The above exception was the direct cause of the following exception: 233s 233s cls = 233s 233s @classmethod 233s def setup_class(cls): 233s cls.tmp_dir = TemporaryDirectory() 233s def tmp(*parts): 233s path = os.path.join(cls.tmp_dir.name, *parts) 233s try: 233s os.makedirs(path) 233s except OSError as e: 233s if e.errno != errno.EEXIST: 233s raise 233s return path 233s 233s cls.home_dir = tmp('home') 233s data_dir = cls.data_dir = tmp('data') 233s config_dir = cls.config_dir = tmp('config') 233s runtime_dir = cls.runtime_dir = tmp('runtime') 233s cls.notebook_dir = tmp('notebooks') 233s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 233s cls.env_patch.start() 233s # Patch systemwide & user-wide data & config directories, to isolate 233s # the tests from oddities of the local setup. But leave Python env 233s # locations alone, so data files for e.g. nbconvert are accessible. 233s # If this isolation isn't sufficient, you may need to run the tests in 233s # a virtualenv or conda env. 
233s cls.path_patch = patch.multiple( 233s jupyter_core.paths, 233s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 233s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 233s ) 233s cls.path_patch.start() 233s 233s config = cls.config or Config() 233s config.NotebookNotary.db_file = ':memory:' 233s 233s cls.token = hexlify(os.urandom(4)).decode('ascii') 233s 233s started = Event() 233s def start_thread(): 233s try: 233s bind_args = cls.get_bind_args() 233s app = cls.notebook = NotebookApp( 233s port_retries=0, 233s open_browser=False, 233s config_dir=cls.config_dir, 233s data_dir=cls.data_dir, 233s runtime_dir=cls.runtime_dir, 233s notebook_dir=cls.notebook_dir, 233s base_url=cls.url_prefix, 233s config=config, 233s allow_root=True, 233s token=cls.token, 233s **bind_args 233s ) 233s if "asyncio" in sys.modules: 233s app._init_asyncio_patch() 233s import asyncio 233s 233s asyncio.set_event_loop(asyncio.new_event_loop()) 233s # Patch the current loop in order to match production 233s # behavior 233s import nest_asyncio 233s 233s nest_asyncio.apply() 233s # don't register signal handler during tests 233s app.init_signal = lambda : None 233s # clear log handlers and propagate to root for nose to capture it 233s # needs to be redone after initialize, which reconfigures logging 233s app.log.propagate = True 233s app.log.handlers = [] 233s app.initialize(argv=cls.get_argv()) 233s app.log.propagate = True 233s app.log.handlers = [] 233s loop = IOLoop.current() 233s loop.add_callback(started.set) 233s app.start() 233s finally: 233s # set the event, so failure to start doesn't cause a hang 233s started.set() 233s app.session_manager.close() 233s cls.notebook_thread = Thread(target=start_thread) 233s cls.notebook_thread.daemon = True 233s cls.notebook_thread.start() 233s started.wait() 233s > cls.wait_until_alive() 233s 233s notebook/tests/launchnotebook.py:198: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s cls = 233s 233s 
@classmethod 233s def wait_until_alive(cls): 233s """Wait for the server to be alive""" 233s url = cls.base_url() + 'api/contents' 233s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 233s try: 233s cls.fetch_url(url) 233s except ModuleNotFoundError as error: 233s # Errors that should be immediately thrown back to caller 233s raise error 233s except Exception as e: 233s if not cls.notebook_thread.is_alive(): 233s > raise RuntimeError("The notebook server failed to start") from e 233s E RuntimeError: The notebook server failed to start 233s 233s notebook/tests/launchnotebook.py:59: RuntimeError 233s __ ERROR at setup of GenericFileCheckpointsAPITest.test_list_nonexistant_dir ___ 233s 233s self = 233s 233s def _new_conn(self) -> socket.socket: 233s """Establish a socket connection and set nodelay settings on it. 233s 233s :return: New socket connection. 233s """ 233s try: 233s > sock = connection.create_connection( 233s (self._dns_host, self.port), 233s self.timeout, 233s source_address=self.source_address, 233s socket_options=self.socket_options, 233s ) 233s 233s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 233s raise err 233s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 233s 233s address = ('localhost', 12341), timeout = None, source_address = None 233s socket_options = [(6, 1, 1)] 233s 233s def create_connection( 233s address: tuple[str, int], 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s source_address: tuple[str, int] | None = None, 233s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 233s ) -> socket.socket: 233s """Connect to *address* and return the socket object. 233s 233s Convenience function. Connect to *address* (a 2-tuple ``(host, 233s port)``) and return the socket object. 
Passing the optional 233s *timeout* parameter will set the timeout on the socket instance 233s before attempting to connect. If no *timeout* is supplied, the 233s global default timeout setting returned by :func:`socket.getdefaulttimeout` 233s is used. If *source_address* is set it must be a tuple of (host, port) 233s for the socket to bind as a source address before making the connection. 233s An host of '' or port 0 tells the OS to use the default. 233s """ 233s 233s host, port = address 233s if host.startswith("["): 233s host = host.strip("[]") 233s err = None 233s 233s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 233s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 233s # The original create_connection function always returns all records. 233s family = allowed_gai_family() 233s 233s try: 233s host.encode("idna") 233s except UnicodeError: 233s raise LocationParseError(f"'{host}', label empty or too long") from None 233s 233s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 233s af, socktype, proto, canonname, sa = res 233s sock = None 233s try: 233s sock = socket.socket(af, socktype, proto) 233s 233s # If provided, set socket level options before connecting. 
233s _set_socket_options(sock, socket_options) 233s 233s if timeout is not _DEFAULT_TIMEOUT: 233s sock.settimeout(timeout) 233s if source_address: 233s sock.bind(source_address) 233s > sock.connect(sa) 233s E ConnectionRefusedError: [Errno 111] Connection refused 233s 233s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 233s 233s The above exception was the direct cause of the following exception: 233s 233s self = 233s method = 'GET', url = '/a%40b/api/contents', body = None 233s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 233s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 233s redirect = False, assert_same_host = False 233s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 233s release_conn = False, chunked = False, body_pos = None, preload_content = False 233s decode_content = False, response_kw = {} 233s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 233s destination_scheme = None, conn = None, release_this_conn = True 233s http_tunnel_required = False, err = None, clean_exit = False 233s 233s def urlopen( # type: ignore[override] 233s self, 233s method: str, 233s url: str, 233s body: _TYPE_BODY | None = None, 233s headers: typing.Mapping[str, str] | None = None, 233s retries: Retry | bool | int | None = None, 233s redirect: bool = True, 233s assert_same_host: bool = True, 233s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 233s pool_timeout: int | None = None, 233s release_conn: bool | None = None, 233s chunked: bool = False, 233s body_pos: _TYPE_BODY_POSITION | None = None, 233s preload_content: bool = True, 233s decode_content: bool = True, 233s **response_kw: typing.Any, 233s ) -> BaseHTTPResponse: 233s """ 233s Get a connection from the pool and perform an HTTP request. 
This is the 233s lowest level call for making a request, so you'll need to specify all 233s the raw details. 233s 233s .. note:: 233s 233s More commonly, it's appropriate to use a convenience method 233s such as :meth:`request`. 233s 233s .. note:: 233s 233s `release_conn` will only behave as expected if 233s `preload_content=False` because we want to make 233s `preload_content=False` the default behaviour someday soon without 233s breaking backwards compatibility. 233s 233s :param method: 233s HTTP request method (such as GET, POST, PUT, etc.) 233s 233s :param url: 233s The URL to perform the request on. 233s 233s :param body: 233s Data to send in the request body, either :class:`str`, :class:`bytes`, 233s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 233s 233s :param headers: 233s Dictionary of custom headers to send, such as User-Agent, 233s If-None-Match, etc. If None, pool headers are used. If provided, 233s these headers completely replace any pool-specific headers. 233s 233s :param retries: 233s Configure the number of retries to allow before raising a 233s :class:`~urllib3.exceptions.MaxRetryError` exception. 233s 233s Pass ``None`` to retry until you receive a response. Pass a 233s :class:`~urllib3.util.retry.Retry` object for fine-grained control 233s over different types of retries. 233s Pass an integer number to retry connection errors that many times, 233s but no other types of errors. Pass zero to never retry. 233s 233s If ``False``, then retries are disabled and any exception is raised 233s immediately. Also, instead of raising a MaxRetryError on redirects, 233s the redirect response will be returned. 233s 233s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 233s 233s :param redirect: 233s If True, automatically handle redirects (status codes 301, 302, 233s 303, 307, 308). Each redirect counts as a retry. Disabling retries 233s will disable redirect, too. 
233s 233s :param assert_same_host: 233s If ``True``, will make sure that the host of the pool requests is 233s consistent else will raise HostChangedError. When ``False``, you can 233s use the pool on an HTTP proxy and request foreign hosts. 233s 233s :param timeout: 233s If specified, overrides the default timeout for this one 233s request. It may be a float (in seconds) or an instance of 233s :class:`urllib3.util.Timeout`. 233s 233s :param pool_timeout: 233s If set and the pool is set to block=True, then this method will 233s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 233s connection is available within the time period. 233s 233s :param bool preload_content: 233s If True, the response's body will be preloaded into memory. 233s 233s :param bool decode_content: 233s If True, will attempt to decode the body based on the 233s 'content-encoding' header. 233s 233s :param release_conn: 233s If False, then the urlopen call will not release the connection 233s back into the pool once a response is received (but will release if 233s you read the entire contents of the response such as when 233s `preload_content=True`). This is useful if you're not preloading 233s the response's content immediately. You will need to call 233s ``r.release_conn()`` on the response ``r`` to return the connection 233s back into the pool. If None, it takes the value of ``preload_content`` 233s which defaults to ``True``. 233s 233s :param bool chunked: 233s If True, urllib3 will send the body using chunked transfer 233s encoding. Otherwise, urllib3 will send the body using the standard 233s content-length form. Defaults to False. 233s 233s :param int body_pos: 233s Position to seek to in file-like body in the event of a retry or 233s redirect. Typically this won't need to be set because urllib3 will 233s auto-populate the value when needed. 
233s """ 233s parsed_url = parse_url(url) 233s destination_scheme = parsed_url.scheme 233s 233s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
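The `setup_class` above launches `NotebookApp` in a daemon thread, signals readiness via an `Event`, and then `wait_until_alive` polls the contents API until the server answers. The same start-then-poll pattern can be sketched in isolation with a stdlib `HTTPServer` standing in for the notebook server (the helper names here are illustrative, not part of the test harness):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch of the start-then-poll pattern used by launchnotebook.py:
# start the server in a daemon thread, then poll until it accepts
# connections. Port 0 asks the OS for any free port.
class _Ping(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), _Ping)
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

def wait_until_alive(host, port, attempts=50):
    # Poll until a TCP connect succeeds, analogous to wait_until_alive
    # polling /api/contents; give up after `attempts` tries. A server
    # that never binds would leave every connect refused, which is the
    # failure mode in the log above.
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=0.1):
                return True
        except OSError:
            pass
    raise RuntimeError("server failed to start")

alive = wait_until_alive(*server.server_address)
print(alive)  # True
server.shutdown()
```

In the failing test the polled server never comes up, so every connect attempt is refused and `wait_until_alive` ultimately raises `RuntimeError("The notebook server failed to start")`.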
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_list_notebooks ______ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls = 
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s cls = 
234s
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                 import asyncio
234s
234s                 asyncio.set_event_loop(asyncio.new_event_loop())
234s                 # Patch the current loop in order to match production
234s                 # behavior
234s                 import nest_asyncio
234s
234s                 nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s
234s notebook/tests/launchnotebook.py:198:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s cls = 
234s
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir __________
234s
234s self = 
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s
234s     Convenience function. Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object. Passing the optional
234s     *timeout* parameter will set the timeout on the socket instance
234s     before attempting to connect. If no *timeout* is supplied, the
234s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s     is used. If *source_address* is set it must be a tuple of (host, port)
234s     for the socket to bind as a source address before making the connection.
234s     An host of '' or port 0 tells the OS to use the default.
234s     """
234s
234s     host, port = address
234s     if host.startswith("["):
234s         host = host.strip("[]")
234s     err = None
234s
234s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s     # The original create_connection function always returns all records.
234s     family = allowed_gai_family()
234s
234s     try:
234s         host.encode("idna")
234s     except UnicodeError:
234s         raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s         af, socktype, proto, canonname, sa = res
234s         sock = None
234s         try:
234s             sock = socket.socket(af, socktype, proto)
234s
234s             # If provided, set socket level options before connecting.
234s             _set_socket_options(sock, socket_options)
234s
234s             if timeout is not _DEFAULT_TIMEOUT:
234s                 sock.settimeout(timeout)
234s             if source_address:
234s                 sock.bind(source_address)
234s >           sock.connect(sa)
234s E           ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s
234s         .. note::
234s
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s
234s         .. note::
234s
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s
234s         :param url:
234s             The URL to perform the request on.
234s
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s
234s         if headers is None:
234s             headers = self.headers
234s
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s         if release_conn is None:
234s             release_conn = preload_content
234s
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s
234s         conn = None
234s
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls = 
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir_hidden_400 _____ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s             raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                 import asyncio
234s 
234s                 asyncio.set_event_loop(asyncio.new_event_loop())
234s                 # Patch the current loop in order to match production
234s                 # behavior
234s                 import nest_asyncio
234s 
234s                 nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir_untitled ______
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s             raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                 import asyncio
234s 
234s                 asyncio.set_event_loop(asyncio.new_event_loop())
234s                 # Patch the current loop in order to match production
234s                 # behavior
234s                 import nest_asyncio
234s 
234s                 nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _________ ERROR at setup of GenericFileCheckpointsAPITest.test_rename __________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
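The `except MaxRetryError` branch quoted above is where urllib3's retry exhaustion becomes the `requests.exceptions.ConnectionError` that the test harness ultimately catches. A self-contained sketch of that dispatch logic, using stand-in exception classes (the names mirror the real ones, but these stubs and the `translate` helper are illustrative, not the requests API; the subclass relationship between `NewConnectionError` and `ConnectTimeoutError` is the quirk the quoted `# TODO: Remove this in 3.0.0` comment refers to):

```python
# Stand-in exception hierarchy, shaped like urllib3's.
class ConnectTimeoutError(Exception): pass
class NewConnectionError(ConnectTimeoutError): pass  # subclasses ConnectTimeoutError
class ResponseError(Exception): pass

class MaxRetryError(Exception):
    def __init__(self, reason):
        self.reason = reason

# Stand-ins for the requests-level exceptions.
class ConnectTimeout(Exception): pass
class RetryError(Exception): pass
class RequestsConnectionError(Exception): pass

def translate(e: MaxRetryError) -> Exception:
    """Map an exhausted-retry error to a requests-style exception,
    following the branch order in the quoted adapter code."""
    if isinstance(e.reason, ConnectTimeoutError):
        # NewConnectionError inherits from ConnectTimeoutError but must
        # not surface as a timeout -- hence the inner isinstance check.
        if not isinstance(e.reason, NewConnectionError):
            return ConnectTimeout(e)
    if isinstance(e.reason, ResponseError):
        return RetryError(e)
    return RequestsConnectionError(e)
```

In this failure, the reason is a `NewConnectionError` (connection refused), so the fall-through case fires and the harness sees a `ConnectionError`.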
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_rename_400_hidden ____ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
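The `Retry.increment` frames above show urllib3 decrementing per-category counters and raising `MaxRetryError` once the budget runs out; this test run uses `Retry(total=0, ...)`, so the very first connection error exhausts it. A minimal standalone sketch of that exhaustion behaviour (the class and names here are illustrative, not urllib3's API):

```python
import dataclasses


class MaxRetriesExceeded(Exception):
    """Stand-in for urllib3's MaxRetryError: the retry budget is spent."""


@dataclasses.dataclass(frozen=True)
class SimpleRetry:
    """Illustrative retry counter. Like urllib3's Retry it is immutable:
    each increment() returns a new object with a decremented budget."""
    total: int = 0

    def increment(self, error: Exception) -> "SimpleRetry":
        new_total = self.total - 1
        if new_total < 0:
            # Budget exhausted: surface the original error as the cause,
            # which is what produces the "Caused by ..." text in the log.
            raise MaxRetriesExceeded(f"max retries exceeded ({error!r})") from error
        return SimpleRetry(total=new_total)


# total=0 means the first connection error already exhausts the budget.
retries = SimpleRetry(total=0)
try:
    retries.increment(ConnectionRefusedError(111, "Connection refused"))
except MaxRetriesExceeded as exc:
    outcome = type(exc.__cause__).__name__
# outcome == "ConnectionRefusedError"
```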
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
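The root cause threaded through every traceback above is a plain `ECONNREFUSED` (errno 111): nothing is listening on `localhost:12341` because the notebook server never came up, and urllib3 merely wraps that OS error in `NewConnectionError`. The same failure can be reproduced with the standard library alone (the port-picking helper is illustrative):

```python
import errno
import socket


def free_tcp_port() -> int:
    """Bind to port 0 so the OS picks a free port, then release it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


port = free_tcp_port()  # just released, so nothing is listening on it
try:
    socket.create_connection(("127.0.0.1", port), timeout=5)
except OSError as exc:
    # errno.ECONNREFUSED is 111 on Linux, matching "[Errno 111]" in the log.
    refused = exc.errno == errno.ECONNREFUSED
# This OSError is exactly what urllib3 catches and re-raises as
# NewConnectionError in the _new_conn() frame shown above.
```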
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_rename_existing _____ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
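`wait_until_alive` above polls the contents API until the server answers or the server thread dies; the `RuntimeError` is the harness giving up after every poll failed. A stripped-down version of the same poll-with-deadline pattern, probing a TCP port instead of the HTTP endpoint (timings and names are illustrative, not the notebook test harness's API):

```python
import socket
import time


def wait_until_alive(host: str, port: int, max_wait: float = 5.0,
                     poll_interval: float = 0.1) -> None:
    """Poll until a TCP connect succeeds, mimicking the harness loop."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        try:
            socket.create_connection((host, port), timeout=poll_interval).close()
            return  # server is up
        except OSError:
            time.sleep(poll_interval)  # not up yet; retry, as the harness does
    raise RuntimeError("The server failed to start")


# A listener standing in for the server; in the real harness NotebookApp
# starts in a background thread and the poller waits for it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
host, port = listener.getsockname()
wait_until_alive(host, port)  # returns once the connect succeeds
listener.close()
```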
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
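The `create_connection` source above iterates over every `getaddrinfo` result (IPv4 and/or IPv6), keeps the last failure, and re-raises it only when no candidate address connects. The same all-candidates-or-fail shape, reduced to its core (the helper name is illustrative):

```python
import socket


def connect_first_working(host: str, port: int,
                          timeout: float = 1.0) -> socket.socket:
    """Try each resolved address in turn; re-raise the last error if all fail."""
    err = None
    for af, socktype, proto, _canon, sa in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sa)
            return sock  # first address that accepts the connection wins
        except OSError as e:
            err = e  # remember the failure, keep trying other addresses
            if sock is not None:
                sock.close()
    # Nothing connected: surface the last error, as urllib3 does with `raise err`.
    raise err if err is not None else OSError("getaddrinfo returned no results")
```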
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_save ___________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
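[Editor's note] The adapter code in this traceback normalizes its `timeout` argument before calling `urlopen`: a `(connect, read)` tuple is unpacked into urllib3's `Timeout` object, and a bare number sets both phases. A minimal stand-alone sketch of that normalization — `TimeoutPair` and `normalize_timeout` are illustrative stand-ins, not the real `TimeoutSauce`/`Timeout` classes:

```python
from typing import NamedTuple, Optional, Tuple, Union

class TimeoutPair(NamedTuple):
    # Illustrative stand-in for urllib3's Timeout(connect=..., read=...)
    connect: Optional[float]
    read: Optional[float]

def normalize_timeout(timeout: Union[None, float, Tuple]) -> TimeoutPair:
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            # Same error shape as in the adapter code above.
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) tuple "
                f"or a single float."
            )
        return TimeoutPair(connect, read)
    # A single value (or None) applies to both phases.
    return TimeoutPair(timeout, timeout)

print(normalize_timeout((3.05, 27)))  # TimeoutPair(connect=3.05, read=27)
print(normalize_timeout(5.0))         # TimeoutPair(connect=5.0, read=5.0)
```

Note that `Timeout(connect=None, read=None, total=None)` in the log means no timeout was set at all — the test harness's `requests.get(url)` call relies entirely on the OS failing the connect, which on localhost is immediate.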
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
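[Editor's note] The `tmp()` helper in `setup_class` above guards `os.makedirs` with an `errno.EEXIST` check so repeated calls for the same path don't fail. Since Python 3.2 the same idempotent behaviour is available directly via `exist_ok=True`; a minimal sketch (not part of the test suite):

```python
import os
import tempfile

# exist_ok=True replaces the try/except-errno.EEXIST pattern used by
# the tmp() helper in setup_class above: repeated calls are no-ops.
with tempfile.TemporaryDirectory() as base:
    path = os.path.join(base, "home", "data")
    os.makedirs(path, exist_ok=True)
    os.makedirs(path, exist_ok=True)  # second call raises no OSError
    print(os.path.isdir(path))        # True
```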
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _________ ERROR at setup of GenericFileCheckpointsAPITest.test_upload __________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
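[Editor's note] The root cause throughout this log is `[Errno 111] Connection refused`: nothing is listening on port 12341 because the notebook server thread died before binding. The same errno can be reproduced with only the standard library — the port here is discovered dynamically rather than using the suite's fixed 12341:

```python
import errno
import socket

# Ask the OS for a free port, then close the listener so the
# subsequent connect is refused, mirroring the traceback above.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
except ConnectionRefusedError as e:
    # ECONNREFUSED is errno 111 on Linux, matching the log.
    print(e.errno == errno.ECONNREFUSED)
```

This is the `OSError` that urllib3's `_new_conn` later wraps in `NewConnectionError`, which `Retry.increment` then wraps in `MaxRetryError`, which requests finally surfaces as `requests.exceptions.ConnectionError`.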
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
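[Editor's note] `Retry.increment` (whose body follows below) returns a *new* `Retry` with the relevant counters decremented, and `is_exhausted()` fires once a counter drops below zero. With `retries = Retry(total=0)` — as shown in this frame — the very first connection error exhausts the budget and raises `MaxRetryError`. A simplified stand-alone model of that bookkeeping, not urllib3's actual class:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MiniRetry:
    """Toy model of urllib3's Retry counter bookkeeping."""
    total: int

    def increment(self) -> "MiniRetry":
        # Each failed attempt consumes one unit of the total budget;
        # the real increment() also tracks connect/read/status counters.
        return replace(self, total=self.total - 1)

    def is_exhausted(self) -> bool:
        # Exhausted once the budget goes negative, which is why
        # total=0 permits the initial request but zero retries.
        return self.total < 0

r = MiniRetry(total=0).increment()
print(r.is_exhausted())  # True
```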
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_b64 ________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s 
234s     Convenience function. Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s             raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_txt ________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s 
234s     Convenience function. Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
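The ``retries`` docstring above enumerates four accepted forms (``None``, a ``Retry`` object, an int, ``False``). A minimal sketch of that coercion, using a hypothetical ``SimpleRetry``/``coerce_retries`` pair rather than urllib3's real ``Retry.from_int``:

```python
# Hypothetical sketch of the documented ``retries`` coercion rules above;
# urllib3's actual logic lives in urllib3.util.retry.Retry.
class SimpleRetry:
    """Tiny stand-in for urllib3.util.retry.Retry (illustration only)."""
    def __init__(self, total, raise_on_redirect=True):
        self.total = total                    # None means "retry until a response"
        self.raise_on_redirect = raise_on_redirect

def coerce_retries(retries):
    if retries is None:
        return SimpleRetry(total=None)        # retry until we receive a response
    if retries is False:
        # retries disabled: raise immediately, return redirect responses as-is
        return SimpleRetry(total=0, raise_on_redirect=False)
    if isinstance(retries, SimpleRetry):
        return retries                        # fine-grained control object
    return SimpleRetry(total=int(retries))    # int: retry connection errors N times
```

Note that the failing request in this log was made with ``Retry(total=0, ...)``, so the very first connection error exhausts the budget, which is why ``MaxRetryError`` is raised immediately.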
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
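The ``except MaxRetryError`` branch in the adapter code above inspects ``e.reason`` to decide which requests-level exception to raise. A sketch of that dispatch, with stand-in exception classes (these names are illustrative, not the real requests/urllib3 types):

```python
# Stand-in exception hierarchy; in urllib3 2.x NewConnectionError really does
# subclass ConnectTimeoutError, which is what the "TODO: Remove this in 3.0.0"
# guard above works around.
class ConnectTimeoutError(Exception): pass
class NewConnectionError(ConnectTimeoutError): pass
class ResponseError(Exception): pass
class ProxyErr(Exception): pass

def map_retry_reason(reason):
    """Return the requests-level exception name chosen for a MaxRetryError reason."""
    if isinstance(reason, ConnectTimeoutError) and not isinstance(reason, NewConnectionError):
        return "ConnectTimeout"
    if isinstance(reason, ResponseError):
        return "RetryError"
    if isinstance(reason, ProxyErr):
        return "ProxyError"
    return "ConnectionError"   # the fallback branch hit in this log
```

In this log the reason is a ``NewConnectionError`` (connection refused), so the fallback ``ConnectionError`` is what ultimately reaches the test harness.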
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
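The ``setup_class`` above starts the notebook server in a daemon thread and then polls it until it answers. A generic stdlib sketch of that wait-until-alive loop (hypothetical helper; the real test polls the contents API with ``requests.get``):

```python
import time

# Generic sketch of the polling pattern used by wait_until_alive above.
def wait_until_alive(probe, max_waittime=30.0, poll_interval=0.1):
    """Poll ``probe`` until it stops raising, or give up after max_waittime."""
    last_error = None
    for _ in range(int(max_waittime / poll_interval)):
        try:
            return probe()
        except Exception as e:        # keep polling on any transient failure
            last_error = e
            time.sleep(poll_interval)
    raise RuntimeError("server failed to start") from last_error
```

The real implementation additionally checks ``cls.notebook_thread.is_alive()`` on each failure so a dead server thread fails fast instead of polling for the full ``MAX_WAITTIME``; that early-exit path is the ``RuntimeError("The notebook server failed to start")`` seen below.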
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_txt_hidden ____ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
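The ``timeout`` parameter described above accepts either a single float or a ``Timeout`` object, and the requests adapter shown earlier in this log additionally accepts a ``(connect, read)`` tuple. A sketch of that normalization, with a stand-in ``Timeout`` class rather than urllib3's ``util.Timeout`` / requests' ``TimeoutSauce``:

```python
# Stand-in for urllib3.util.Timeout (illustration only).
class Timeout:
    def __init__(self, connect=None, read=None):
        self.connect = connect
        self.read = read

def normalize_timeout(timeout):
    """Mirror the documented float / (connect, read) / Timeout handling."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return Timeout(connect=connect, read=read)
    if isinstance(timeout, Timeout):
        return timeout
    return Timeout(connect=timeout, read=timeout)   # single float covers both
```

In this log ``timeout=None`` was passed, so both limits are unset and the connect attempt fails immediately with ``ECONNREFUSED`` rather than timing out.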
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_v2 ________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls = 
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s cls = 
234s
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                     import asyncio
234s
234s                     asyncio.set_event_loop(asyncio.new_event_loop())
234s                     # Patch the current loop in order to match production
234s                     # behavior
234s                     import nest_asyncio
234s
234s                     nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s
234s notebook/tests/launchnotebook.py:198:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s cls = 
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _______________ ERROR at setup of KernelAPITest.test_connections _______________
234s
234s self = 
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s
234s     Convenience function. Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object. Passing the optional
234s     *timeout* parameter will set the timeout on the socket instance
234s     before attempting to connect. If no *timeout* is supplied, the
234s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s     is used. If *source_address* is set it must be a tuple of (host, port)
234s     for the socket to bind as a source address before making the connection.
234s     An host of '' or port 0 tells the OS to use the default.
234s     """
234s
234s     host, port = address
234s     if host.startswith("["):
234s         host = host.strip("[]")
234s     err = None
234s
234s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s     # The original create_connection function always returns all records.
234s     family = allowed_gai_family()
234s
234s     try:
234s         host.encode("idna")
234s     except UnicodeError:
234s         raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s         af, socktype, proto, canonname, sa = res
234s         sock = None
234s         try:
234s             sock = socket.socket(af, socktype, proto)
234s
234s             # If provided, set socket level options before connecting.
234s             _set_socket_options(sock, socket_options)
234s
234s             if timeout is not _DEFAULT_TIMEOUT:
234s                 sock.settimeout(timeout)
234s             if source_address:
234s                 sock.bind(source_address)
234s >           sock.connect(sa)
234s E           ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s     def urlopen( # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s
234s         .. note::
234s
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s
234s         .. note::
234s
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s
234s         :param url:
234s             The URL to perform the request on.
234s
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s
234s         if headers is None:
234s             headers = self.headers
234s
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s         if release_conn is None:
234s             release_conn = preload_content
234s
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s
234s         conn = None
234s
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy() # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers) # type: ignore[union-attr]
234s
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s
234s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls = 
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _____________ ERROR at setup of KernelAPITest.test_default_kernel ______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _____________ ERROR at setup of KernelAPITest.test_kernel_handler ______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ___________ ERROR at setup of KernelAPITest.test_main_kernel_handler ___________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options)
234s
234s if timeout is not _DEFAULT_TIMEOUT:
234s sock.settimeout(timeout)
234s if source_address:
234s sock.bind(source_address)
234s > sock.connect(sa)
234s E ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s def urlopen( # type: ignore[override]
234s self,
234s method: str,
234s url: str,
234s body: _TYPE_BODY | None = None,
234s headers: typing.Mapping[str, str] | None = None,
234s retries: Retry | bool | int | None = None,
234s redirect: bool = True,
234s assert_same_host: bool = True,
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s pool_timeout: int | None = None,
234s release_conn: bool | None = None,
234s chunked: bool = False,
234s body_pos: _TYPE_BODY_POSITION | None = None,
234s preload_content: bool = True,
234s decode_content: bool = True,
234s **response_kw: typing.Any,
234s ) -> BaseHTTPResponse:
234s """
234s Get a connection from the pool and perform an HTTP request. This is the
234s lowest level call for making a request, so you'll need to specify all
234s the raw details.
234s
234s .. note::
234s
234s More commonly, it's appropriate to use a convenience method
234s such as :meth:`request`.
234s
234s .. note::
234s
234s `release_conn` will only behave as expected if
234s `preload_content=False` because we want to make
234s `preload_content=False` the default behaviour someday soon without
234s breaking backwards compatibility.
234s
234s :param method:
234s HTTP request method (such as GET, POST, PUT, etc.)
234s
234s :param url:
234s The URL to perform the request on.
234s
234s :param body:
234s Data to send in the request body, either :class:`str`, :class:`bytes`,
234s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s :param headers:
234s Dictionary of custom headers to send, such as User-Agent,
234s If-None-Match, etc. If None, pool headers are used. If provided,
234s these headers completely replace any pool-specific headers.
234s
234s :param retries:
234s Configure the number of retries to allow before raising a
234s :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s Pass ``None`` to retry until you receive a response. Pass a
234s :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s over different types of retries.
234s Pass an integer number to retry connection errors that many times,
234s but no other types of errors. Pass zero to never retry.
234s
234s If ``False``, then retries are disabled and any exception is raised
234s immediately. Also, instead of raising a MaxRetryError on redirects,
234s the redirect response will be returned.
234s
234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s :param redirect:
234s If True, automatically handle redirects (status codes 301, 302,
234s 303, 307, 308). Each redirect counts as a retry. Disabling retries
234s will disable redirect, too.
234s
234s :param assert_same_host:
234s If ``True``, will make sure that the host of the pool requests is
234s consistent else will raise HostChangedError. When ``False``, you can
234s use the pool on an HTTP proxy and request foreign hosts.
234s
234s :param timeout:
234s If specified, overrides the default timeout for this one
234s request. It may be a float (in seconds) or an instance of
234s :class:`urllib3.util.Timeout`.
234s
234s :param pool_timeout:
234s If set and the pool is set to block=True, then this method will
234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s connection is available within the time period.
234s
234s :param bool preload_content:
234s If True, the response's body will be preloaded into memory.
234s
234s :param bool decode_content:
234s If True, will attempt to decode the body based on the
234s 'content-encoding' header.
234s
234s :param release_conn:
234s If False, then the urlopen call will not release the connection
234s back into the pool once a response is received (but will release if
234s you read the entire contents of the response such as when
234s `preload_content=True`). This is useful if you're not preloading
234s the response's content immediately. You will need to call
234s ``r.release_conn()`` on the response ``r`` to return the connection
234s back into the pool. If None, it takes the value of ``preload_content``
234s which defaults to ``True``.
234s
234s :param bool chunked:
234s If True, urllib3 will send the body using chunked transfer
234s encoding. Otherwise, urllib3 will send the body using the standard
234s content-length form. Defaults to False.
234s
234s :param int body_pos:
234s Position to seek to in file-like body in the event of a retry or
234s redirect. Typically this won't need to be set because urllib3 will
234s auto-populate the value when needed.
234s """
234s parsed_url = parse_url(url)
234s destination_scheme = parsed_url.scheme
234s
234s if headers is None:
234s headers = self.headers
234s
234s if not isinstance(retries, Retry):
234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s if release_conn is None:
234s release_conn = preload_content
234s
234s # Check host
234s if assert_same_host and not self.is_same_host(url):
234s raise HostChangedError(self, url, retries)
234s
234s # Ensure that the URL we're connecting to is properly encoded
234s if url.startswith("/"):
234s url = to_str(_encode_target(url))
234s else:
234s url = to_str(parsed_url.url)
234s
234s conn = None
234s
234s # Track whether `conn` needs to be released before
234s # returning/raising/recursing. Update this variable if necessary, and
234s # leave `release_conn` constant throughout the function. That way, if
234s # the function recurses, the original value of `release_conn` will be
234s # passed down into the recursive call, and its value will be respected.
234s #
234s # See issue #651 [1] for details.
234s #
234s # [1]
234s release_this_conn = release_conn
234s
234s http_tunnel_required = connection_requires_http_tunnel(
234s self.proxy, self.proxy_config, destination_scheme
234s )
234s
234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s # have to copy the headers dict so we can safely change it without those
234s # changes being reflected in anyone else's copy.
234s if not http_tunnel_required:
234s headers = headers.copy() # type: ignore[attr-defined]
234s headers.update(self.proxy_headers) # type: ignore[union-attr]
234s
234s # Must keep the exception bound to a separate variable or else Python 3
234s # complains about UnboundLocalError.
234s err = None
234s
234s # Keep track of whether we cleanly exited the except block. This
234s # ensures we do proper cleanup in finally.
234s clean_exit = False
234s
234s # Rewind body position, if needed. Record current position
234s # for future rewinds in the event of a redirect/retry.
234s body_pos = set_file_position(body, body_pos)
234s
234s try:
234s # Request a connection from the queue.
234s timeout_obj = self._get_timeout(timeout)
234s conn = self._get_conn(timeout=pool_timeout)
234s
234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s
234s # Is this a closed/new connection that requires CONNECT tunnelling?
234s if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s try:
234s self._prepare_proxy(conn)
234s except (BaseSSLError, OSError, SocketTimeout) as e:
234s self._raise_timeout(
234s err=e, url=self.proxy.url, timeout_value=conn.timeout
234s )
234s raise
234s
234s # If we're going to release the connection in ``finally:``, then
234s # the response doesn't need to know about the connection. Otherwise
234s # it will also try to release it and we'll have a double-release
234s # mess.
234s response_conn = conn if not release_conn else None
234s
234s # Make the request on the HTTPConnection object
234s > response = self._make_request(
234s conn,
234s method,
234s url,
234s timeout=timeout_obj,
234s body=body,
234s headers=headers,
234s chunked=chunked,
234s retries=retries,
234s response_conn=response_conn,
234s preload_content=preload_content,
234s decode_content=decode_content,
234s **response_kw,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s except socket.gaierror as e:
234s raise NameResolutionError(self.host, self, e) from e
234s except SocketTimeout as e:
234s raise ConnectTimeoutError(
234s self,
234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s ) from e
234s
234s except OSError as e:
234s > raise NewConnectionError(
234s self, f"Failed to establish a new connection: {e}"
234s ) from e
234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content.
234s :param timeout: (optional) How long to wait for the server to send
234s data before giving up, as a float, or a :ref:`(connect timeout,
234s read timeout) ` tuple.
234s :type timeout: float or tuple or urllib3 Timeout object
234s :param verify: (optional) Either a boolean, in which case it controls whether
234s we verify the server's TLS certificate, or a string, in which case it
234s must be a path to a CA bundle to use
234s :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response
234s """
234s
234s try:
234s conn = self.get_connection(request.url, proxies)
234s except LocationValueError as e:
234s raise InvalidURL(e, request=request)
234s
234s self.cert_verify(conn, request.url, verify, cert)
234s url = self.request_url(request, proxies)
234s self.add_headers(
234s request,
234s stream=stream,
234s timeout=timeout,
234s verify=verify,
234s cert=cert,
234s proxies=proxies,
234s )
234s
234s chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s if isinstance(timeout, tuple):
234s try:
234s connect, read = timeout
234s timeout = TimeoutSauce(connect=connect, read=read)
234s except ValueError:
234s raise ValueError(
234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s f"or a single float to set both timeouts to the same value."
234s )
234s elif isinstance(timeout, TimeoutSauce):
234s pass
234s else:
234s timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s try:
234s > resp = conn.urlopen(
234s method=request.method,
234s url=url,
234s body=request.body,
234s headers=request.headers,
234s redirect=False,
234s assert_same_host=False,
234s preload_content=False,
234s decode_content=False,
234s retries=self.max_retries,
234s timeout=timeout,
234s chunked=chunked,
234s )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool =
234s _stacktrace =
234s
234s def increment(
234s self,
234s method: str | None = None,
234s url: str | None = None,
234s response: BaseHTTPResponse | None = None,
234s error: Exception | None = None,
234s _pool: ConnectionPool | None = None,
234s _stacktrace: TracebackType | None = None,
234s ) -> Retry:
234s """Return a new Retry object with incremented retry counters.
234s
234s :param response: A response object, or None, if the server did not
234s return a response.
234s :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s :param Exception error: An error encountered during the request, or
234s None if the response was received successfully.
234s
234s :return: A new ``Retry`` object.
234s """
234s if self.total is False and error:
234s # Disabled, indicate to re-raise the error.
234s raise reraise(type(error), error, _stacktrace)
234s
234s total = self.total
234s if total is not None:
234s total -= 1
234s
234s connect = self.connect
234s read = self.read
234s redirect = self.redirect
234s status_count = self.status
234s other = self.other
234s cause = "unknown"
234s status = None
234s redirect_location = None
234s
234s if error and self._is_connection_error(error):
234s # Connect retry?
234s if connect is False:
234s raise reraise(type(error), error, _stacktrace)
234s elif connect is not None:
234s connect -= 1
234s
234s elif error and self._is_read_error(error):
234s # Read retry?
234s if read is False or method is None or not self._is_method_retryable(method):
234s raise reraise(type(error), error, _stacktrace)
234s elif read is not None:
234s read -= 1
234s
234s elif error:
234s # Other retry?
234s if other is not None:
234s other -= 1
234s
234s elif response and response.get_redirect_location():
234s # Redirect retry?
234s if redirect is not None:
234s redirect -= 1
234s cause = "too many redirects"
234s response_redirect_location = response.get_redirect_location()
234s if response_redirect_location:
234s redirect_location = response_redirect_location
234s status = response.status
234s
234s else:
234s # Incrementing because of a server error like a 500 in
234s # status_forcelist and the given method is in the allowed_methods
234s cause = ResponseError.GENERIC_ERROR
234s if response and response.status:
234s if status_count is not None:
234s status_count -= 1
234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s status = response.status
234s
234s history = self.history + (
234s RequestHistory(method, url, error, status, redirect_location),
234s )
234s
234s new_retry = self.new(
234s total=total,
234s connect=connect,
234s read=read,
234s redirect=redirect,
234s status=status_count,
234s other=other,
234s history=history,
234s )
234s
234s if new_retry.is_exhausted():
234s reason = error or ResponseError(cause)
234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls =
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s > cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content.
234s :param timeout: (optional) How long to wait for the server to send
234s data before giving up, as a float, or a :ref:`(connect timeout,
234s read timeout) ` tuple.
234s :type timeout: float or tuple or urllib3 Timeout object
234s :param verify: (optional) Either a boolean, in which case it controls whether
234s we verify the server's TLS certificate, or a string, in which case it
234s must be a path to a CA bundle to use
234s :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response
234s """
234s
234s try:
234s conn = self.get_connection(request.url, proxies)
234s except LocationValueError as e:
234s raise InvalidURL(e, request=request)
234s
234s self.cert_verify(conn, request.url, verify, cert)
234s url = self.request_url(request, proxies)
234s self.add_headers(
234s request,
234s stream=stream,
234s timeout=timeout,
234s verify=verify,
234s cert=cert,
234s proxies=proxies,
234s )
234s
234s chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s if isinstance(timeout, tuple):
234s try:
234s connect, read = timeout
234s timeout = TimeoutSauce(connect=connect, read=read)
234s except ValueError:
234s raise ValueError(
234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s f"or a single float to set both timeouts to the same value."
234s )
234s elif isinstance(timeout, TimeoutSauce):
234s pass
234s else:
234s timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s try:
234s resp = conn.urlopen(
234s method=request.method,
234s url=url,
234s body=request.body,
234s headers=request.headers,
234s redirect=False,
234s assert_same_host=False,
234s preload_content=False,
234s decode_content=False,
234s retries=self.max_retries,
234s timeout=timeout,
234s chunked=chunked,
234s )
234s
234s except (ProtocolError, OSError) as err:
234s raise ConnectionError(err, request=request)
234s
234s except MaxRetryError as e:
234s if isinstance(e.reason, ConnectTimeoutError):
234s # TODO: Remove this in 3.0.0: see #2811
234s if not isinstance(e.reason, NewConnectionError):
234s raise ConnectTimeout(e, request=request)
234s
234s if isinstance(e.reason, ResponseError):
234s raise RetryError(e, request=request)
234s
234s if isinstance(e.reason, _ProxyError):
234s raise ProxyError(e, request=request)
234s
234s if isinstance(e.reason, _SSLError):
234s # This branch is for urllib3 v1.22 and later.
234s raise SSLError(e, request=request)
234s
234s > raise ConnectionError(e, request=request)
234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s cls =
234s
234s @classmethod
234s def setup_class(cls):
234s cls.tmp_dir = TemporaryDirectory()
234s def tmp(*parts):
234s path = os.path.join(cls.tmp_dir.name, *parts)
234s try:
234s os.makedirs(path)
234s except OSError as e:
234s if e.errno != errno.EEXIST:
234s raise
234s return path
234s
234s cls.home_dir = tmp('home')
234s data_dir = cls.data_dir = tmp('data')
234s config_dir = cls.config_dir = tmp('config')
234s runtime_dir = cls.runtime_dir = tmp('runtime')
234s cls.notebook_dir = tmp('notebooks')
234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s cls.env_patch.start()
234s # Patch systemwide & user-wide data & config directories, to isolate
234s # the tests from oddities of the local setup. But leave Python env
234s # locations alone, so data files for e.g. nbconvert are accessible.
234s # If this isolation isn't sufficient, you may need to run the tests in
234s # a virtualenv or conda env.
234s cls.path_patch = patch.multiple(
234s jupyter_core.paths,
234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s )
234s cls.path_patch.start()
234s
234s config = cls.config or Config()
234s config.NotebookNotary.db_file = ':memory:'
234s
234s cls.token = hexlify(os.urandom(4)).decode('ascii')
234s
234s started = Event()
234s def start_thread():
234s try:
234s bind_args = cls.get_bind_args()
234s app = cls.notebook = NotebookApp(
234s port_retries=0,
234s open_browser=False,
234s config_dir=cls.config_dir,
234s data_dir=cls.data_dir,
234s runtime_dir=cls.runtime_dir,
234s notebook_dir=cls.notebook_dir,
234s base_url=cls.url_prefix,
234s config=config,
234s allow_root=True,
234s token=cls.token,
234s **bind_args
234s )
234s if "asyncio" in sys.modules:
234s app._init_asyncio_patch()
234s import asyncio
234s
234s asyncio.set_event_loop(asyncio.new_event_loop())
234s # Patch the current loop in order to match production
234s # behavior
234s import nest_asyncio
234s
234s nest_asyncio.apply()
234s # don't register signal handler during tests
234s app.init_signal = lambda : None
234s # clear log handlers and propagate to root for nose to capture it
234s # needs to be redone after initialize, which reconfigures logging
234s app.log.propagate = True
234s app.log.handlers = []
234s app.initialize(argv=cls.get_argv())
234s app.log.propagate = True
234s app.log.handlers = []
234s loop = IOLoop.current()
234s loop.add_callback(started.set)
234s app.start()
234s finally:
234s # set the event, so failure to start doesn't cause a hang
234s started.set()
234s app.session_manager.close()
234s cls.notebook_thread = Thread(target=start_thread)
234s cls.notebook_thread.daemon = True
234s cls.notebook_thread.start()
234s started.wait()
234s > cls.wait_until_alive()
234s
234s notebook/tests/launchnotebook.py:198:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s cls =
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s cls.fetch_url(url)
234s except ModuleNotFoundError as error:
234s # Errors that should be immediately thrown back to caller
234s raise error
234s except Exception as e:
234s if not cls.notebook_thread.is_alive():
234s > raise RuntimeError("The notebook server failed to start") from e
234s E RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _______________ ERROR at setup of KernelAPITest.test_no_kernels ________________
234s
234s self =
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s > sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s def create_connection(
234s address: tuple[str, int],
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s source_address: tuple[str, int] | None = None,
234s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s """Connect to *address* and return the socket object.
234s
234s Convenience function. Connect to *address* (a 2-tuple ``(host,
234s port)``) and return the socket object. Passing the optional
234s *timeout* parameter will set the timeout on the socket instance
234s before attempting to connect. If no *timeout* is supplied, the
234s global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s is used. If *source_address* is set it must be a tuple of (host, port)
234s for the socket to bind as a source address before making the connection.
234s An host of '' or port 0 tells the OS to use the default.
234s """
234s
234s host, port = address
234s if host.startswith("["):
234s host = host.strip("[]")
234s err = None
234s
234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s # The original create_connection function always returns all records.
234s family = allowed_gai_family()
234s
234s try:
234s host.encode("idna")
234s except UnicodeError:
234s raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s af, socktype, proto, canonname, sa = res
234s sock = None
234s try:
234s sock = socket.socket(af, socktype, proto)
234s
234s # If provided, set socket level options before connecting.
234s _set_socket_options(sock, socket_options)
234s
234s if timeout is not _DEFAULT_TIMEOUT:
234s sock.settimeout(timeout)
234s if source_address:
234s sock.bind(source_address)
234s > sock.connect(sa)
234s E ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s def urlopen( # type: ignore[override]
234s self,
234s method: str,
234s url: str,
234s body: _TYPE_BODY | None = None,
234s headers: typing.Mapping[str, str] | None = None,
234s retries: Retry | bool | int | None = None,
234s redirect: bool = True,
234s assert_same_host: bool = True,
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s pool_timeout: int | None = None,
234s release_conn: bool | None = None,
234s chunked: bool = False,
234s body_pos: _TYPE_BODY_POSITION | None = None,
234s preload_content: bool = True,
234s decode_content: bool = True,
234s **response_kw: typing.Any,
234s ) -> BaseHTTPResponse:
234s """
234s Get a connection from the pool and perform an HTTP request. This is the
234s lowest level call for making a request, so you'll need to specify all
234s the raw details.
234s
234s .. note::
234s
234s More commonly, it's appropriate to use a convenience method
234s such as :meth:`request`.
234s
234s .. note::
234s
234s `release_conn` will only behave as expected if
234s `preload_content=False` because we want to make
234s `preload_content=False` the default behaviour someday soon without
234s breaking backwards compatibility.
234s
234s :param method:
234s HTTP request method (such as GET, POST, PUT, etc.)
234s
234s :param url:
234s The URL to perform the request on.
234s
234s :param body:
234s Data to send in the request body, either :class:`str`, :class:`bytes`,
234s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s :param headers:
234s Dictionary of custom headers to send, such as User-Agent,
234s If-None-Match, etc. If None, pool headers are used. If provided,
234s these headers completely replace any pool-specific headers.
234s
234s :param retries:
234s Configure the number of retries to allow before raising a
234s :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s Pass ``None`` to retry until you receive a response. Pass a
234s :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s over different types of retries.
234s Pass an integer number to retry connection errors that many times,
234s but no other types of errors. Pass zero to never retry.
234s
234s If ``False``, then retries are disabled and any exception is raised
234s immediately. Also, instead of raising a MaxRetryError on redirects,
234s the redirect response will be returned.
234s
234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s :param redirect:
234s If True, automatically handle redirects (status codes 301, 302,
234s 303, 307, 308). Each redirect counts as a retry. Disabling retries
234s will disable redirect, too.
234s
234s :param assert_same_host:
234s If ``True``, will make sure that the host of the pool requests is
234s consistent else will raise HostChangedError. When ``False``, you can
234s use the pool on an HTTP proxy and request foreign hosts.
234s
234s :param timeout:
234s If specified, overrides the default timeout for this one
234s request. It may be a float (in seconds) or an instance of
234s :class:`urllib3.util.Timeout`.
234s
234s :param pool_timeout:
234s If set and the pool is set to block=True, then this method will
234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s connection is available within the time period.
234s
234s :param bool preload_content:
234s If True, the response's body will be preloaded into memory.
234s
234s :param bool decode_content:
234s If True, will attempt to decode the body based on the
234s 'content-encoding' header.
234s
234s :param release_conn:
234s If False, then the urlopen call will not release the connection
234s back into the pool once a response is received (but will release if
234s you read the entire contents of the response such as when
234s `preload_content=True`). This is useful if you're not preloading
234s the response's content immediately. You will need to call
234s ``r.release_conn()`` on the response ``r`` to return the connection
234s back into the pool. If None, it takes the value of ``preload_content``
234s which defaults to ``True``.
234s
234s :param bool chunked:
234s If True, urllib3 will send the body using chunked transfer
234s encoding. Otherwise, urllib3 will send the body using the standard
234s content-length form. Defaults to False.
234s
234s :param int body_pos:
234s Position to seek to in file-like body in the event of a retry or
234s redirect. Typically this won't need to be set because urllib3 will
234s auto-populate the value when needed.
234s """
234s parsed_url = parse_url(url)
234s destination_scheme = parsed_url.scheme
234s
234s if headers is None:
234s headers = self.headers
234s
234s if not isinstance(retries, Retry):
234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s if release_conn is None:
234s release_conn = preload_content
234s
234s # Check host
234s if assert_same_host and not self.is_same_host(url):
234s raise HostChangedError(self, url, retries)
234s
234s # Ensure that the URL we're connecting to is properly encoded
234s if url.startswith("/"):
234s url = to_str(_encode_target(url))
234s else:
234s url = to_str(parsed_url.url)
234s
234s conn = None
234s
234s # Track whether `conn` needs to be released before
234s # returning/raising/recursing. Update this variable if necessary, and
234s # leave `release_conn` constant throughout the function. That way, if
234s # the function recurses, the original value of `release_conn` will be
234s # passed down into the recursive call, and its value will be respected.
234s #
234s # See issue #651 [1] for details.
234s #
234s # [1]
234s release_this_conn = release_conn
234s
234s http_tunnel_required = connection_requires_http_tunnel(
234s self.proxy, self.proxy_config, destination_scheme
234s )
234s
234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s # have to copy the headers dict so we can safely change it without those
234s # changes being reflected in anyone else's copy.
234s if not http_tunnel_required:
234s headers = headers.copy() # type: ignore[attr-defined]
234s headers.update(self.proxy_headers) # type: ignore[union-attr]
234s
234s # Must keep the exception bound to a separate variable or else Python 3
234s # complains about UnboundLocalError.
234s err = None
234s
234s # Keep track of whether we cleanly exited the except block. This
234s # ensures we do proper cleanup in finally.
234s clean_exit = False
234s
234s # Rewind body position, if needed. Record current position
234s # for future rewinds in the event of a redirect/retry.
234s body_pos = set_file_position(body, body_pos)
234s
234s try:
234s # Request a connection from the queue.
234s timeout_obj = self._get_timeout(timeout)
234s conn = self._get_conn(timeout=pool_timeout)
234s
234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s
234s # Is this a closed/new connection that requires CONNECT tunnelling?
234s if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s try:
234s self._prepare_proxy(conn)
234s except (BaseSSLError, OSError, SocketTimeout) as e:
234s self._raise_timeout(
234s err=e, url=self.proxy.url, timeout_value=conn.timeout
234s )
234s raise
234s
234s # If we're going to release the connection in ``finally:``, then
234s # the response doesn't need to know about the connection. Otherwise
234s # it will also try to release it and we'll have a double-release
234s # mess.
234s response_conn = conn if not release_conn else None
234s
234s # Make the request on the HTTPConnection object
234s > response = self._make_request(
234s conn,
234s method,
234s url,
234s timeout=timeout_obj,
234s body=body,
234s headers=headers,
234s chunked=chunked,
234s retries=retries,
234s response_conn=response_conn,
234s preload_content=preload_content,
234s decode_content=decode_content,
234s **response_kw,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause)
234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls = 
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s > cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s cls.fetch_url(url)
234s except ModuleNotFoundError as error:
234s # Errors that should be immediately thrown back to caller
234s raise error
234s except Exception as e:
234s if not cls.notebook_thread.is_alive():
234s > raise RuntimeError("The notebook server failed to start") from e
234s E RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ____________ ERROR at setup of AsyncKernelAPITest.test_connections _____________
234s
234s self = 
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s > sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s def create_connection(
234s address: tuple[str, int],
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s source_address: tuple[str, int] | None = None,
234s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s """Connect to *address* and return the socket object.
234s
234s Convenience function. Connect to *address* (a 2-tuple ``(host,
234s port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
234s             raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!")
234s >       super().setup_class()
234s 
234s notebook/services/kernels/tests/test_kernels_api.py:206: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:198: in setup_class
234s     cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ___________ ERROR at setup of AsyncKernelAPITest.test_default_kernel ___________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.  Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect.  If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used.  If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 234s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 234s > super().setup_class() 234s 234s notebook/services/kernels/tests/test_kernels_api.py:206: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ___________ ERROR at setup of AsyncKernelAPITest.test_kernel_handler ___________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 234s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 234s > super().setup_class() 234s 234s notebook/services/kernels/tests/test_kernels_api.py:206: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ________ ERROR at setup of AsyncKernelAPITest.test_main_kernel_handler _________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
234s             raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!")
234s >       super().setup_class()
234s 
234s notebook/services/kernels/tests/test_kernels_api.py:206: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:198: in setup_class
234s     cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _____________ ERROR at setup of AsyncKernelAPITest.test_no_kernels _____________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 234s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 234s > super().setup_class() 234s 234s notebook/services/kernels/tests/test_kernels_api.py:206: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ________________ ERROR at setup of KernelFilterTest.test_config ________________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                     import asyncio
234s 
234s                     asyncio.set_event_loop(asyncio.new_event_loop())
234s                     # Patch the current loop in order to match production
234s                     # behavior
234s                     import nest_asyncio
234s 
234s                     nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _______________ ERROR at setup of KernelCullingTest.test_culling _______________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object. Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen( # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy() # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers) # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
    @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ___________ ERROR at setup of APITest.test_get_kernel_resource_file ____________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s 
234s     Convenience function. Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                     import asyncio
234s 
234s                     asyncio.set_event_loop(asyncio.new_event_loop())
234s                     # Patch the current loop in order to match production
234s                     # behavior
234s                     import nest_asyncio
234s 
234s                     nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ________________ ERROR at setup of APITest.test_get_kernelspec _________________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.  Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect.  If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used.  If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s             status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                     import asyncio
234s 
234s                     asyncio.set_event_loop(asyncio.new_event_loop())
234s                     # Patch the current loop in order to match production
234s                     # behavior
234s                     import nest_asyncio
234s 
234s                     nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _____________ ERROR at setup of APITest.test_get_kernelspec_spaces _____________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.  Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect.  If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used.  If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1]
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy() # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers) # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                 import asyncio
234s 
234s                 asyncio.set_event_loop(asyncio.new_event_loop())
234s                 # Patch the current loop in order to match production
234s                 # behavior
234s                 import nest_asyncio
234s 
234s                 nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s __________ ERROR at setup of APITest.test_get_nonexistant_kernelspec ___________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object. Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E           ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen( # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1]
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy() # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers) # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ___________ ERROR at setup of APITest.test_get_nonexistant_resource ____________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                 import asyncio
234s 
234s                 asyncio.set_event_loop(asyncio.new_event_loop())
234s                 # Patch the current loop in order to match production
234s                 # behavior
234s                 import nest_asyncio
234s 
234s                 nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _______________ ERROR at setup of APITest.test_list_kernelspecs ________________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s 
234s     Convenience function. Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object. Passing the optional
234s     *timeout* parameter will set the timeout on the socket instance
234s     before attempting to connect. If no *timeout* is supplied, the
234s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s     is used. If *source_address* is set it must be a tuple of (host, port)
234s     for the socket to bind as a source address before making the connection.
234s     An host of '' or port 0 tells the OS to use the default.
234s     """
234s 
234s     host, port = address
234s     if host.startswith("["):
234s         host = host.strip("[]")
234s     err = None
234s 
234s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s     # The original create_connection function always returns all records.
234s     family = allowed_gai_family()
234s 
234s     try:
234s         host.encode("idna")
234s     except UnicodeError:
234s         raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s         af, socktype, proto, canonname, sa = res
234s         sock = None
234s         try:
234s             sock = socket.socket(af, socktype, proto)
234s 
234s             # If provided, set socket level options before connecting.
234s             _set_socket_options(sock, socket_options)
234s 
234s             if timeout is not _DEFAULT_TIMEOUT:
234s                 sock.settimeout(timeout)
234s             if source_address:
234s                 sock.bind(source_address)
234s >           sock.connect(sa)
234s E           ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
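The `setup_class` shown above isolates the test server from the host environment with `unittest.mock` patchers. A small sketch of the `patch.dict` technique it uses for `os.environ` (the variable value here is illustrative, not taken from the log):

```python
import os
from unittest.mock import patch

original = os.environ.get("JUPYTER_CONFIG_DIR")

# patch.dict overlays entries onto os.environ for the duration of the
# patch; stop() restores the previous state, so tests do not depend on
# (or leak into) the host's configuration.
env_patch = patch.dict(os.environ, {"JUPYTER_CONFIG_DIR": "/tmp/isolated-config"})
env_patch.start()
assert os.environ["JUPYTER_CONFIG_DIR"] == "/tmp/isolated-config"

env_patch.stop()
assert os.environ.get("JUPYTER_CONFIG_DIR") == original
```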
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _____________ ERROR at setup of APITest.test_list_kernelspecs_bad ______________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _________________ ERROR at setup of APITest.test_list_formats __________________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _________________ ERROR at setup of SessionAPITest.test_create _________________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1]
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                 import asyncio
234s 
234s                 asyncio.set_event_loop(asyncio.new_event_loop())
234s                 # Patch the current loop in order to match production
234s                 # behavior
234s                 import nest_asyncio
234s 
234s                 nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _________ ERROR at setup of SessionAPITest.test_create_console_session _________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object. Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1]
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ___________ ERROR at setup of SessionAPITest.test_create_deprecated ____________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
    @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s __________ ERROR at setup of SessionAPITest.test_create_file_session ___________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s 
234s     Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _________ ERROR at setup of SessionAPITest.test_create_with_kernel_id __________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _________________ ERROR at setup of SessionAPITest.test_delete _________________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ____________ ERROR at setup of SessionAPITest.test_modify_kernel_id ____________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.
Passing the optional
234s     *timeout* parameter will set the timeout on the socket instance
234s     before attempting to connect. If no *timeout* is supplied, the
234s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s     is used. If *source_address* is set it must be a tuple of (host, port)
234s     for the socket to bind as a source address before making the connection.
234s     An host of '' or port 0 tells the OS to use the default.
234s     """
234s
234s     host, port = address
234s     if host.startswith("["):
234s         host = host.strip("[]")
234s     err = None
234s
234s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s     # The original create_connection function always returns all records.
234s     family = allowed_gai_family()
234s
234s     try:
234s         host.encode("idna")
234s     except UnicodeError:
234s         raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s         af, socktype, proto, canonname, sa = res
234s         sock = None
234s         try:
234s             sock = socket.socket(af, socktype, proto)
234s
234s             # If provided, set socket level options before connecting.
234s             _set_socket_options(sock, socket_options)
234s
234s             if timeout is not _DEFAULT_TIMEOUT:
234s                 sock.settimeout(timeout)
234s             if source_address:
234s                 sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s
234s         .. note::
234s
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s
234s         .. note::
234s
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s
234s         :param url:
234s             The URL to perform the request on.
234s
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s
234s         if headers is None:
234s             headers = self.headers
234s
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s         if release_conn is None:
234s             release_conn = preload_content
234s
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s
234s         conn = None
234s
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls = 
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s cls = 
234s
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                     import asyncio
234s
234s                     asyncio.set_event_loop(asyncio.new_event_loop())
234s                     # Patch the current loop in order to match production
234s                     # behavior
234s                     import nest_asyncio
234s
234s                     nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s
234s notebook/tests/launchnotebook.py:198:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s cls = 
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ___________ ERROR at setup of SessionAPITest.test_modify_kernel_name ___________
234s
234s self = 
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object. Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s
234s         .. note::
234s
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s
234s         .. note::
234s
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s
234s         :param url:
234s             The URL to perform the request on.
234s
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s
234s         if headers is None:
234s             headers = self.headers
234s
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s         if release_conn is None:
234s             release_conn = preload_content
234s
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s
234s         conn = None
234s
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ______________ ERROR at setup of SessionAPITest.test_modify_path _______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _________ ERROR at setup of SessionAPITest.test_modify_path_deprecated _________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ______________ ERROR at setup of SessionAPITest.test_modify_type _______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ______________ ERROR at setup of AsyncSessionAPITest.test_create _______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
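The `retries` parameter documented above, together with the `Retry.increment` code later in this log, explains why the traceback shows `MaxRetryError` after a single attempt: the adapter passed `Retry(total=0, ...)`, and each error decrements the budget. A stdlib-only sketch of that counter arithmetic (an emulation, not urllib3's actual `Retry` class):

```python
# Emulates the core of Retry.increment as quoted in this log:
# every error decrements `total`; a connection error also decrements
# `connect` when that counter is tracked; the budget is exhausted
# once any tracked counter goes negative.
def increment(total, is_connection_error, connect=None):
    if total is not None:
        total -= 1
    if is_connection_error and connect is not None:
        connect -= 1
    exhausted = any(c is not None and c < 0 for c in (total, connect))
    return total, exhausted

# Retry(total=0): the first connection error drives total to -1,
# which is why a single ECONNREFUSED yields MaxRetryError here.
total, exhausted = increment(0, True)
assert total == -1 and exhausted
```

With `total=3` the same call would return `(2, False)`, i.e. the request would be retried.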
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
234s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 234s > super().setup_class() 234s 234s notebook/services/sessions/tests/test_sessions_api.py:274: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ______ ERROR at setup of AsyncSessionAPITest.test_create_console_session _______ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s     family = allowed_gai_family()
234s
234s     try:
234s         host.encode("idna")
234s     except UnicodeError:
234s         raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s         af, socktype, proto, canonname, sa = res
234s         sock = None
234s         try:
234s             sock = socket.socket(af, socktype, proto)
234s
234s             # If provided, set socket level options before connecting.
234s             _set_socket_options(sock, socket_options)
234s
234s             if timeout is not _DEFAULT_TIMEOUT:
234s                 sock.settimeout(timeout)
234s             if source_address:
234s                 sock.bind(source_address)
234s >           sock.connect(sa)
234s E           ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s
234s         .. note::
234s
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s
234s         .. note::
234s
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s
234s         :param url:
234s             The URL to perform the request on.
234s
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s
234s         if headers is None:
234s             headers = self.headers
234s
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s         if release_conn is None:
234s             release_conn = preload_content
234s
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s
234s         conn = None
234s
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1]
234s         release_this_conn = release_conn
234s
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool =
234s _stacktrace =
234s
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls =
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s cls =
234s
234s     @classmethod
234s     def setup_class(cls):
234s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
234s             raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
234s >       super().setup_class()
234s
234s notebook/services/sessions/tests/test_sessions_api.py:274:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:198: in setup_class
234s     cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s cls =
234s
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _________ ERROR at setup of AsyncSessionAPITest.test_create_deprecated _________
234s
234s self =
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s
234s     Convenience function. Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object. Passing the optional
234s     *timeout* parameter will set the timeout on the socket instance
234s     before attempting to connect. If no *timeout* is supplied, the
234s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s     is used. If *source_address* is set it must be a tuple of (host, port)
234s     for the socket to bind as a source address before making the connection.
234s     An host of '' or port 0 tells the OS to use the default.
234s     """
234s
234s     host, port = address
234s     if host.startswith("["):
234s         host = host.strip("[]")
234s     err = None
234s
234s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s     # The original create_connection function always returns all records.
234s     family = allowed_gai_family()
234s
234s     try:
234s         host.encode("idna")
234s     except UnicodeError:
234s         raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s         af, socktype, proto, canonname, sa = res
234s         sock = None
234s         try:
234s             sock = socket.socket(af, socktype, proto)
234s
234s             # If provided, set socket level options before connecting.
234s             _set_socket_options(sock, socket_options)
234s
234s             if timeout is not _DEFAULT_TIMEOUT:
234s                 sock.settimeout(timeout)
234s             if source_address:
234s                 sock.bind(source_address)
234s >           sock.connect(sa)
234s E           ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s
234s         .. note::
234s
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s
234s         .. note::
234s
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s
234s         :param url:
234s             The URL to perform the request on.
234s
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s
234s         if headers is None:
234s             headers = self.headers
234s
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s         if release_conn is None:
234s             release_conn = preload_content
234s
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s
234s         conn = None
234s
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1]
234s         release_this_conn = release_conn
234s
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required.
234s             raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
234s >       super().setup_class()
234s 
234s notebook/services/sessions/tests/test_sessions_api.py:274: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:198: in setup_class
234s     cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ________ ERROR at setup of AsyncSessionAPITest.test_create_file_session ________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object. Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen( # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy() # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers) # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 234s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 234s > super().setup_class() 234s 234s notebook/services/sessions/tests/test_sessions_api.py:274: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _______ ERROR at setup of AsyncSessionAPITest.test_create_with_kernel_id _______ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.  Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect.  If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used.  If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1] 
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s ______________ ERROR at setup of AsyncSessionAPITest.test_delete _______________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.  Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect.  If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used.  If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately.
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 234s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 234s > super().setup_class() 234s 234s notebook/services/sessions/tests/test_sessions_api.py:274: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _________ ERROR at setup of AsyncSessionAPITest.test_modify_kernel_id __________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s             status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
234s             raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
234s >       super().setup_class()
234s 
234s notebook/services/sessions/tests/test_sessions_api.py:274:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:198: in setup_class
234s     cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ________ ERROR at setup of AsyncSessionAPITest.test_modify_kernel_name _________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object. Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s         """
234s         parsed_url = parse_url(url)
234s         destination_scheme = parsed_url.scheme
234s 
234s         if headers is None:
234s             headers = self.headers
234s 
234s         if not isinstance(retries, Retry):
234s             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s         if release_conn is None:
234s             release_conn = preload_content
234s 
234s         # Check host
234s         if assert_same_host and not self.is_same_host(url):
234s             raise HostChangedError(self, url, retries)
234s 
234s         # Ensure that the URL we're connecting to is properly encoded
234s         if url.startswith("/"):
234s             url = to_str(_encode_target(url))
234s         else:
234s             url = to_str(parsed_url.url)
234s 
234s         conn = None
234s 
234s         # Track whether `conn` needs to be released before
234s         # returning/raising/recursing. Update this variable if necessary, and
234s         # leave `release_conn` constant throughout the function. That way, if
234s         # the function recurses, the original value of `release_conn` will be
234s         # passed down into the recursive call, and its value will be respected.
234s         #
234s         # See issue #651 [1] for details.
234s         #
234s         # [1]
234s         release_this_conn = release_conn
234s 
234s         http_tunnel_required = connection_requires_http_tunnel(
234s             self.proxy, self.proxy_config, destination_scheme
234s         )
234s 
234s         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s         # have to copy the headers dict so we can safely change it without those
234s         # changes being reflected in anyone else's copy.
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >       raise ConnectionError(e, request=request)
234s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         if not async_testing_enabled:  # Can be removed once jupyter_client >= 6.1 is required.
234s             raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!")
234s >       super().setup_class()
234s 
234s notebook/services/sessions/tests/test_sessions_api.py:274: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:198: in setup_class
234s     cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ____________ ERROR at setup of AsyncSessionAPITest.test_modify_path ____________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s         if not http_tunnel_required:
234s             headers = headers.copy()  # type: ignore[attr-defined]
234s             headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s         # Must keep the exception bound to a separate variable or else Python 3
234s         # complains about UnboundLocalError.
234s         err = None
234s 
234s         # Keep track of whether we cleanly exited the except block. This
234s         # ensures we do proper cleanup in finally.
234s         clean_exit = False
234s 
234s         # Rewind body position, if needed. Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s [... requests/urllib3 traceback identical to the previous test error, ending in the same requests.exceptions.ConnectionError at /usr/lib/python3/dist-packages/requests/adapters.py:519 ...]
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ______ ERROR at setup of AsyncSessionAPITest.test_modify_path_deprecated _______
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 234s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 234s > super().setup_class() 234s 234s notebook/services/sessions/tests/test_sessions_api.py:274: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ____________ ERROR at setup of AsyncSessionAPITest.test_modify_type ____________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 234s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 234s > super().setup_class() 234s 234s notebook/services/sessions/tests/test_sessions_api.py:274: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ____________ ERROR at setup of TerminalAPITest.test_create_terminal ____________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ________ ERROR at setup of TerminalAPITest.test_create_terminal_via_get ________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s         raise reraise(type(error), error, _stacktrace)
234s 
234s     total = self.total
234s     if total is not None:
234s         total -= 1
234s 
234s     connect = self.connect
234s     read = self.read
234s     redirect = self.redirect
234s     status_count = self.status
234s     other = self.other
234s     cause = "unknown"
234s     status = None
234s     redirect_location = None
234s 
234s     if error and self._is_connection_error(error):
234s         # Connect retry?
234s         if connect is False:
234s             raise reraise(type(error), error, _stacktrace)
234s         elif connect is not None:
234s             connect -= 1
234s 
234s     elif error and self._is_read_error(error):
234s         # Read retry?
234s         if read is False or method is None or not self._is_method_retryable(method):
234s             raise reraise(type(error), error, _stacktrace)
234s         elif read is not None:
234s             read -= 1
234s 
234s     elif error:
234s         # Other retry?
234s         if other is not None:
234s             other -= 1
234s 
234s     elif response and response.get_redirect_location():
234s         # Redirect retry?
234s         if redirect is not None:
234s             redirect -= 1
234s         cause = "too many redirects"
234s         response_redirect_location = response.get_redirect_location()
234s         if response_redirect_location:
234s             redirect_location = response_redirect_location
234s         status = response.status
234s 
234s     else:
234s         # Incrementing because of a server error like a 500 in
234s         # status_forcelist and the given method is in the allowed_methods
234s         cause = ResponseError.GENERIC_ERROR
234s         if response and response.status:
234s             if status_count is not None:
234s                 status_count -= 1
234s             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s             status = response.status
234s 
234s     history = self.history + (
234s         RequestHistory(method, url, error, status, redirect_location),
234s     )
234s 
234s     new_retry = self.new(
234s         total=total,
234s         connect=connect,
234s         read=read,
234s         redirect=redirect,
234s         status=status_count,
234s         other=other,
234s         history=history,
234s     )
234s 
234s     if new_retry.is_exhausted():
234s         reason = error or ResponseError(cause)
234s >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E       urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls =
234s 
234s @classmethod
234s def wait_until_alive(cls):
234s     """Wait for the server to be alive"""
234s     url = cls.base_url() + 'api/contents'
234s     for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s         try:
234s >           cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s def send(
234s     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s     """Sends PreparedRequest object. Returns Response object.
234s 
234s     :param request: The :class:`PreparedRequest ` being sent.
234s     :param stream: (optional) Whether to stream the request content.
234s     :param timeout: (optional) How long to wait for the server to send
234s         data before giving up, as a float, or a :ref:`(connect timeout,
234s         read timeout) ` tuple.
234s     :type timeout: float or tuple or urllib3 Timeout object
234s     :param verify: (optional) Either a boolean, in which case it controls whether
234s         we verify the server's TLS certificate, or a string, in which case it
234s         must be a path to a CA bundle to use
234s     :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s     :param proxies: (optional) The proxies dictionary to apply to the request.
234s     :rtype: requests.Response
234s     """
234s 
234s     try:
234s         conn = self.get_connection(request.url, proxies)
234s     except LocationValueError as e:
234s         raise InvalidURL(e, request=request)
234s 
234s     self.cert_verify(conn, request.url, verify, cert)
234s     url = self.request_url(request, proxies)
234s     self.add_headers(
234s         request,
234s         stream=stream,
234s         timeout=timeout,
234s         verify=verify,
234s         cert=cert,
234s         proxies=proxies,
234s     )
234s 
234s     chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s     if isinstance(timeout, tuple):
234s         try:
234s             connect, read = timeout
234s             timeout = TimeoutSauce(connect=connect, read=read)
234s         except ValueError:
234s             raise ValueError(
234s                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                 f"or a single float to set both timeouts to the same value."
234s             )
234s     elif isinstance(timeout, TimeoutSauce):
234s         pass
234s     else:
234s         timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s     try:
234s         resp = conn.urlopen(
234s             method=request.method,
234s             url=url,
234s             body=request.body,
234s             headers=request.headers,
234s             redirect=False,
234s             assert_same_host=False,
234s             preload_content=False,
234s             decode_content=False,
234s             retries=self.max_retries,
234s             timeout=timeout,
234s             chunked=chunked,
234s         )
234s 
234s     except (ProtocolError, OSError) as err:
234s         raise ConnectionError(err, request=request)
234s 
234s     except MaxRetryError as e:
234s         if isinstance(e.reason, ConnectTimeoutError):
234s             # TODO: Remove this in 3.0.0: see #2811
234s             if not isinstance(e.reason, NewConnectionError):
234s                 raise ConnectTimeout(e, request=request)
234s 
234s         if isinstance(e.reason, ResponseError):
234s             raise RetryError(e, request=request)
234s 
234s         if isinstance(e.reason, _ProxyError):
234s             raise ProxyError(e, request=request)
234s 
234s         if isinstance(e.reason, _SSLError):
234s             # This branch is for urllib3 v1.22 and later.
234s             raise SSLError(e, request=request)
234s 
234s >       raise ConnectionError(e, request=request)
234s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls =
234s 
234s @classmethod
234s def setup_class(cls):
234s     cls.tmp_dir = TemporaryDirectory()
234s     def tmp(*parts):
234s         path = os.path.join(cls.tmp_dir.name, *parts)
234s         try:
234s             os.makedirs(path)
234s         except OSError as e:
234s             if e.errno != errno.EEXIST:
234s                 raise
234s         return path
234s 
234s     cls.home_dir = tmp('home')
234s     data_dir = cls.data_dir = tmp('data')
234s     config_dir = cls.config_dir = tmp('config')
234s     runtime_dir = cls.runtime_dir = tmp('runtime')
234s     cls.notebook_dir = tmp('notebooks')
234s     cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s     cls.env_patch.start()
234s     # Patch systemwide & user-wide data & config directories, to isolate
234s     # the tests from oddities of the local setup. But leave Python env
234s     # locations alone, so data files for e.g. nbconvert are accessible.
234s     # If this isolation isn't sufficient, you may need to run the tests in
234s     # a virtualenv or conda env.
234s     cls.path_patch = patch.multiple(
234s         jupyter_core.paths,
234s         SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s         SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s     )
234s     cls.path_patch.start()
234s 
234s     config = cls.config or Config()
234s     config.NotebookNotary.db_file = ':memory:'
234s 
234s     cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s     started = Event()
234s     def start_thread():
234s         try:
234s             bind_args = cls.get_bind_args()
234s             app = cls.notebook = NotebookApp(
234s                 port_retries=0,
234s                 open_browser=False,
234s                 config_dir=cls.config_dir,
234s                 data_dir=cls.data_dir,
234s                 runtime_dir=cls.runtime_dir,
234s                 notebook_dir=cls.notebook_dir,
234s                 base_url=cls.url_prefix,
234s                 config=config,
234s                 allow_root=True,
234s                 token=cls.token,
234s                 **bind_args
234s             )
234s             if "asyncio" in sys.modules:
234s                 app._init_asyncio_patch()
234s                 import asyncio
234s 
234s                 asyncio.set_event_loop(asyncio.new_event_loop())
234s                 # Patch the current loop in order to match production
234s                 # behavior
234s                 import nest_asyncio
234s 
234s                 nest_asyncio.apply()
234s             # don't register signal handler during tests
234s             app.init_signal = lambda : None
234s             # clear log handlers and propagate to root for nose to capture it
234s             # needs to be redone after initialize, which reconfigures logging
234s             app.log.propagate = True
234s             app.log.handlers = []
234s             app.initialize(argv=cls.get_argv())
234s             app.log.propagate = True
234s             app.log.handlers = []
234s             loop = IOLoop.current()
234s             loop.add_callback(started.set)
234s             app.start()
234s         finally:
234s             # set the event, so failure to start doesn't cause a hang
234s             started.set()
234s             app.session_manager.close()
234s     cls.notebook_thread = Thread(target=start_thread)
234s     cls.notebook_thread.daemon = True
234s     cls.notebook_thread.start()
234s     started.wait()
234s >   cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls =
234s 
234s @classmethod
234s def wait_until_alive(cls):
234s     """Wait for the server to be alive"""
234s     url = cls.base_url() + 'api/contents'
234s     for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s         try:
234s             cls.fetch_url(url)
234s         except ModuleNotFoundError as error:
234s             # Errors that should be immediately thrown back to caller
234s             raise error
234s         except Exception as e:
234s             if not cls.notebook_thread.is_alive():
234s >               raise RuntimeError("The notebook server failed to start") from e
234s E               RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _______ ERROR at setup of TerminalAPITest.test_create_terminal_with_name _______
234s 
234s self =
234s 
234s def _new_conn(self) -> socket.socket:
234s     """Establish a socket connection and set nodelay settings on it.
234s 
234s     :return: New socket connection.
234s     """
234s     try:
234s >       sock = connection.create_connection(
234s             (self._dns_host, self.port),
234s             self.timeout,
234s             source_address=self.source_address,
234s             socket_options=self.socket_options,
234s         )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s 
234s     Convenience function. Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object. Passing the optional
234s     *timeout* parameter will set the timeout on the socket instance
234s     before attempting to connect. If no *timeout* is supplied, the
234s     global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s     is used. If *source_address* is set it must be a tuple of (host, port)
234s     for the socket to bind as a source address before making the connection.
234s     An host of '' or port 0 tells the OS to use the default.
234s     """
234s 
234s     host, port = address
234s     if host.startswith("["):
234s         host = host.strip("[]")
234s     err = None
234s 
234s     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s     # The original create_connection function always returns all records.
234s     family = allowed_gai_family()
234s 
234s     try:
234s         host.encode("idna")
234s     except UnicodeError:
234s         raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s         af, socktype, proto, canonname, sa = res
234s         sock = None
234s         try:
234s             sock = socket.socket(af, socktype, proto)
234s 
234s             # If provided, set socket level options before connecting.
234s             _set_socket_options(sock, socket_options)
234s 
234s             if timeout is not _DEFAULT_TIMEOUT:
234s                 sock.settimeout(timeout)
234s             if source_address:
234s                 sock.bind(source_address)
234s >           sock.connect(sa)
234s E           ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self =
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s def urlopen(  # type: ignore[override]
234s     self,
234s     method: str,
234s     url: str,
234s     body: _TYPE_BODY | None = None,
234s     headers: typing.Mapping[str, str] | None = None,
234s     retries: Retry | bool | int | None = None,
234s     redirect: bool = True,
234s     assert_same_host: bool = True,
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     pool_timeout: int | None = None,
234s     release_conn: bool | None = None,
234s     chunked: bool = False,
234s     body_pos: _TYPE_BODY_POSITION | None = None,
234s     preload_content: bool = True,
234s     decode_content: bool = True,
234s     **response_kw: typing.Any,
234s ) -> BaseHTTPResponse:
234s     """
234s     Get a connection from the pool and perform an HTTP request. This is the
234s     lowest level call for making a request, so you'll need to specify all
234s     the raw details.
234s 
234s     .. note::
234s 
234s         More commonly, it's appropriate to use a convenience method
234s         such as :meth:`request`.
234s 
234s     .. note::
234s 
234s         `release_conn` will only behave as expected if
234s         `preload_content=False` because we want to make
234s         `preload_content=False` the default behaviour someday soon without
234s         breaking backwards compatibility.
234s 
234s     :param method:
234s         HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s     :param url:
234s         The URL to perform the request on.
234s 
234s     :param body:
234s         Data to send in the request body, either :class:`str`, :class:`bytes`,
234s         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s     :param headers:
234s         Dictionary of custom headers to send, such as User-Agent,
234s         If-None-Match, etc. If None, pool headers are used. If provided,
234s         these headers completely replace any pool-specific headers.
234s 
234s     :param retries:
234s         Configure the number of retries to allow before raising a
234s         :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s         Pass ``None`` to retry until you receive a response. Pass a
234s         :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s         over different types of retries.
234s         Pass an integer number to retry connection errors that many times,
234s         but no other types of errors. Pass zero to never retry.
234s 
234s         If ``False``, then retries are disabled and any exception is raised
234s         immediately. Also, instead of raising a MaxRetryError on redirects,
234s         the redirect response will be returned.
234s 
234s     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s     :param redirect:
234s         If True, automatically handle redirects (status codes 301, 302,
234s         303, 307, 308). Each redirect counts as a retry. Disabling retries
234s         will disable redirect, too.
234s 
234s     :param assert_same_host:
234s         If ``True``, will make sure that the host of the pool requests is
234s         consistent else will raise HostChangedError. When ``False``, you can
234s         use the pool on an HTTP proxy and request foreign hosts.
234s 
234s     :param timeout:
234s         If specified, overrides the default timeout for this one
234s         request. It may be a float (in seconds) or an instance of
234s         :class:`urllib3.util.Timeout`.
234s 
234s     :param pool_timeout:
234s         If set and the pool is set to block=True, then this method will
234s         block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s         connection is available within the time period.
234s 
234s     :param bool preload_content:
234s         If True, the response's body will be preloaded into memory.
234s 
234s     :param bool decode_content:
234s         If True, will attempt to decode the body based on the
234s         'content-encoding' header.
234s 
234s     :param release_conn:
234s         If False, then the urlopen call will not release the connection
234s         back into the pool once a response is received (but will release if
234s         you read the entire contents of the response such as when
234s         `preload_content=True`). This is useful if you're not preloading
234s         the response's content immediately. You will need to call
234s         ``r.release_conn()`` on the response ``r`` to return the connection
234s         back into the pool. If None, it takes the value of ``preload_content``
234s         which defaults to ``True``.
234s 
234s     :param bool chunked:
234s         If True, urllib3 will send the body using chunked transfer
234s         encoding. Otherwise, urllib3 will send the body using the standard
234s         content-length form. Defaults to False.
234s 
234s     :param int body_pos:
234s         Position to seek to in file-like body in the event of a retry or
234s         redirect. Typically this won't need to be set because urllib3 will
234s         auto-populate the value when needed.
234s     """
234s     parsed_url = parse_url(url)
234s     destination_scheme = parsed_url.scheme
234s 
234s     if headers is None:
234s         headers = self.headers
234s 
234s     if not isinstance(retries, Retry):
234s         retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s 
234s     if release_conn is None:
234s         release_conn = preload_content
234s 
234s     # Check host
234s     if assert_same_host and not self.is_same_host(url):
234s         raise HostChangedError(self, url, retries)
234s 
234s     # Ensure that the URL we're connecting to is properly encoded
234s     if url.startswith("/"):
234s         url = to_str(_encode_target(url))
234s     else:
234s         url = to_str(parsed_url.url)
234s 
234s     conn = None
234s 
234s     # Track whether `conn` needs to be released before
234s     # returning/raising/recursing. Update this variable if necessary, and
234s     # leave `release_conn` constant throughout the function. That way, if
234s     # the function recurses, the original value of `release_conn` will be
234s     # passed down into the recursive call, and its value will be respected.
234s     #
234s     # See issue #651 [1] for details.
234s     #
234s     # [1]
234s     release_this_conn = release_conn
234s 
234s     http_tunnel_required = connection_requires_http_tunnel(
234s         self.proxy, self.proxy_config, destination_scheme
234s     )
234s 
234s     # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s     # have to copy the headers dict so we can safely change it without those
234s     # changes being reflected in anyone else's copy.
234s     if not http_tunnel_required:
234s         headers = headers.copy()  # type: ignore[attr-defined]
234s         headers.update(self.proxy_headers)  # type: ignore[union-attr]
234s 
234s     # Must keep the exception bound to a separate variable or else Python 3
234s     # complains about UnboundLocalError.
234s     err = None
234s 
234s     # Keep track of whether we cleanly exited the except block. This
234s     # ensures we do proper cleanup in finally.
234s     clean_exit = False
234s 
234s     # Rewind body position, if needed. Record current position
234s     # for future rewinds in the event of a redirect/retry.
234s     body_pos = set_file_position(body, body_pos)
234s 
234s     try:
234s         # Request a connection from the queue.
234s         timeout_obj = self._get_timeout(timeout)
234s         conn = self._get_conn(timeout=pool_timeout)
234s 
234s         conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s         # Is this a closed/new connection that requires CONNECT tunnelling?
234s         if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s             try:
234s                 self._prepare_proxy(conn)
234s             except (BaseSSLError, OSError, SocketTimeout) as e:
234s                 self._raise_timeout(
234s                     err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                 )
234s                 raise
234s 
234s         # If we're going to release the connection in ``finally:``, then
234s         # the response doesn't need to know about the connection. Otherwise
234s         # it will also try to release it and we'll have a double-release
234s         # mess.
234s         response_conn = conn if not release_conn else None
234s 
234s         # Make the request on the HTTPConnection object
234s >       response = self._make_request(
234s             conn,
234s             method,
234s             url,
234s             timeout=timeout_obj,
234s             body=body,
234s             headers=headers,
234s             chunked=chunked,
234s             retries=retries,
234s             response_conn=response_conn,
234s             preload_content=preload_content,
234s             decode_content=decode_content,
234s             **response_kw,
234s         )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self =
234s 
234s def _new_conn(self) -> socket.socket:
234s     """Establish a socket connection and set nodelay settings on it.
234s 
234s     :return: New socket connection.
234s     """
234s     try:
234s         sock = connection.create_connection(
234s             (self._dns_host, self.port),
234s             self.timeout,
234s             source_address=self.source_address,
234s             socket_options=self.socket_options,
234s         )
234s     except socket.gaierror as e:
234s         raise NameResolutionError(self.host, self, e) from e
234s     except SocketTimeout as e:
234s         raise ConnectTimeoutError(
234s             self,
234s             f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s         ) from e
234s 
234s     except OSError as e:
234s >       raise NewConnectionError(
234s             self, f"Failed to establish a new connection: {e}"
234s         ) from e
234s E       urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s def send(
234s     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s     """Sends PreparedRequest object. Returns Response object.
234s 
234s     :param request: The :class:`PreparedRequest ` being sent.
234s     :param stream: (optional) Whether to stream the request content.
234s     :param timeout: (optional) How long to wait for the server to send
234s         data before giving up, as a float, or a :ref:`(connect timeout,
234s         read timeout) ` tuple.
234s     :type timeout: float or tuple or urllib3 Timeout object
234s     :param verify: (optional) Either a boolean, in which case it controls whether
234s         we verify the server's TLS certificate, or a string, in which case it
234s         must be a path to a CA bundle to use
234s     :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s     :param proxies: (optional) The proxies dictionary to apply to the request.
234s     :rtype: requests.Response
234s     """
234s 
234s     try:
234s         conn = self.get_connection(request.url, proxies)
234s     except LocationValueError as e:
234s         raise InvalidURL(e, request=request)
234s 
234s     self.cert_verify(conn, request.url, verify, cert)
234s     url = self.request_url(request, proxies)
234s     self.add_headers(
234s         request,
234s         stream=stream,
234s         timeout=timeout,
234s         verify=verify,
234s         cert=cert,
234s         proxies=proxies,
234s     )
234s 
234s     chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s     if isinstance(timeout, tuple):
234s         try:
234s             connect, read = timeout
234s             timeout = TimeoutSauce(connect=connect, read=read)
234s         except ValueError:
234s             raise ValueError(
234s                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                 f"or a single float to set both timeouts to the same value."
234s             )
234s     elif isinstance(timeout, TimeoutSauce):
234s         pass
234s     else:
234s         timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s     try:
234s >       resp = conn.urlopen(
234s             method=request.method,
234s             url=url,
234s             body=request.body,
234s             headers=request.headers,
234s             redirect=False,
234s             assert_same_host=False,
234s             preload_content=False,
234s             decode_content=False,
234s             retries=self.max_retries,
234s             timeout=timeout,
234s             chunked=chunked,
234s         )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool =
234s _stacktrace =
234s 
234s def increment(
234s     self,
234s     method: str | None = None,
234s     url: str | None = None,
234s     response: BaseHTTPResponse | None = None,
234s     error: Exception | None = None,
234s     _pool: ConnectionPool | None = None,
234s     _stacktrace: TracebackType | None = None,
234s ) -> Retry:
234s     """Return a new Retry object with incremented retry counters.
234s 
234s     :param response: A response object, or None, if the server did not
234s         return a response.
234s     :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s     :param Exception error: An error encountered during the request, or
234s         None if the response was received successfully.
234s 
234s     :return: A new ``Retry`` object.
234s     """
234s     if self.total is False and error:
234s         # Disabled, indicate to re-raise the error.
234s         raise reraise(type(error), error, _stacktrace)
234s 
234s     total = self.total
234s     if total is not None:
234s         total -= 1
234s 
234s     connect = self.connect
234s     read = self.read
234s     redirect = self.redirect
234s     status_count = self.status
234s     other = self.other
234s     cause = "unknown"
234s     status = None
234s     redirect_location = None
234s 
234s     if error and self._is_connection_error(error):
234s         # Connect retry?
234s         if connect is False:
234s             raise reraise(type(error), error, _stacktrace)
234s         elif connect is not None:
234s             connect -= 1
234s 
234s     elif error and self._is_read_error(error):
234s         # Read retry?
234s         if read is False or method is None or not self._is_method_retryable(method):
234s             raise reraise(type(error), error, _stacktrace)
234s         elif read is not None:
234s             read -= 1
234s 
234s     elif error:
234s         # Other retry?
234s         if other is not None:
234s             other -= 1
234s 
234s     elif response and response.get_redirect_location():
234s         # Redirect retry?
234s         if redirect is not None:
234s             redirect -= 1
234s         cause = "too many redirects"
234s         response_redirect_location = response.get_redirect_location()
234s         if response_redirect_location:
234s             redirect_location = response_redirect_location
234s         status = response.status
234s 
234s     else:
234s         # Incrementing because of a server error like a 500 in
234s         # status_forcelist and the given method is in the allowed_methods
234s         cause = ResponseError.GENERIC_ERROR
234s         if response and response.status:
234s             if status_count is not None:
234s                 status_count -= 1
234s             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s             status = response.status
234s 
234s     history = self.history + (
234s         RequestHistory(method, url, error, status, redirect_location),
234s     )
234s 
234s     new_retry = self.new(
234s         total=total,
234s         connect=connect,
234s         read=read,
234s         redirect=redirect,
234s         status=status_count,
234s         other=other,
234s         history=history,
234s     )
234s 
234s     if new_retry.is_exhausted():
234s         reason = error or ResponseError(cause)
234s >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E       urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls =
234s 
234s @classmethod
234s def wait_until_alive(cls):
234s     """Wait for the server to be alive"""
234s     url = cls.base_url() + 'api/contents'
234s     for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s         try:
234s >           cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s def send(
234s     self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s     """Sends PreparedRequest object. Returns Response object.
234s 
234s     :param request: The :class:`PreparedRequest ` being sent.
234s     :param stream: (optional) Whether to stream the request content.
234s     :param timeout: (optional) How long to wait for the server to send
234s         data before giving up, as a float, or a :ref:`(connect timeout,
234s         read timeout) ` tuple.
234s     :type timeout: float or tuple or urllib3 Timeout object
234s     :param verify: (optional) Either a boolean, in which case it controls whether
234s         we verify the server's TLS certificate, or a string, in which case it
234s         must be a path to a CA bundle to use
234s     :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s     :param proxies: (optional) The proxies dictionary to apply to the request.
234s     :rtype: requests.Response
234s     """
234s 
234s     try:
234s         conn = self.get_connection(request.url, proxies)
234s     except LocationValueError as e:
234s         raise InvalidURL(e, request=request)
234s 
234s     self.cert_verify(conn, request.url, verify, cert)
234s     url = self.request_url(request, proxies)
234s     self.add_headers(
234s         request,
234s         stream=stream,
234s         timeout=timeout,
234s         verify=verify,
234s         cert=cert,
234s         proxies=proxies,
234s     )
234s 
234s     chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s     if isinstance(timeout, tuple):
234s         try:
234s             connect, read = timeout
234s             timeout = TimeoutSauce(connect=connect, read=read)
234s         except ValueError:
234s             raise ValueError(
234s                 f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                 f"or a single float to set both timeouts to the same value."
234s             )
234s     elif isinstance(timeout, TimeoutSauce):
234s         pass
234s     else:
234s         timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s     try:
234s         resp = conn.urlopen(
234s             method=request.method,
234s             url=url,
234s             body=request.body,
234s             headers=request.headers,
234s             redirect=False,
234s             assert_same_host=False,
234s             preload_content=False,
234s             decode_content=False,
234s             retries=self.max_retries,
234s             timeout=timeout,
234s             chunked=chunked,
234s         )
234s 
234s     except (ProtocolError, OSError) as err:
234s         raise ConnectionError(err, request=request)
234s 
234s     except MaxRetryError as e:
234s         if isinstance(e.reason, ConnectTimeoutError):
234s             # TODO: Remove this in 3.0.0: see #2811
234s             if not isinstance(e.reason, NewConnectionError):
234s                 raise ConnectTimeout(e, request=request)
234s 
234s         if isinstance(e.reason, ResponseError):
234s             raise RetryError(e, request=request)
234s 
234s         if isinstance(e.reason, _ProxyError):
234s             raise ProxyError(e, request=request)
234s 
234s         if isinstance(e.reason, _SSLError):
234s             # This branch is for urllib3 v1.22 and later.
234s             raise SSLError(e, request=request)
234s 
234s >       raise ConnectionError(e, request=request)
234s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls =
234s 
234s @classmethod
234s def setup_class(cls):
234s     cls.tmp_dir = TemporaryDirectory()
234s     def tmp(*parts):
234s         path = os.path.join(cls.tmp_dir.name, *parts)
234s         try:
234s             os.makedirs(path)
234s         except OSError as e:
234s             if e.errno != errno.EEXIST:
234s                 raise
234s         return path
234s 
234s     cls.home_dir = tmp('home')
234s     data_dir = cls.data_dir = tmp('data')
234s     config_dir = cls.config_dir = tmp('config')
234s     runtime_dir = cls.runtime_dir = tmp('runtime')
234s     cls.notebook_dir = tmp('notebooks')
234s     cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s     cls.env_patch.start()
234s     # Patch systemwide & user-wide data & config directories, to isolate
234s     # the tests from oddities of the local setup. But leave Python env
234s     # locations alone, so data files for e.g. nbconvert are accessible.
234s     # If this isolation isn't sufficient, you may need to run the tests in
234s     # a virtualenv or conda env.
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _____________ ERROR at setup of TerminalAPITest.test_no_terminals ______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                     import asyncio
234s 
234s                     asyncio.set_event_loop(asyncio.new_event_loop())
234s                     # Patch the current loop in order to match production
234s                     # behavior
234s                     import nest_asyncio
234s 
234s                     nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ___________ ERROR at setup of TerminalAPITest.test_terminal_handler ____________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object. Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen( # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s             raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                     import asyncio
234s 
234s                     asyncio.set_event_loop(asyncio.new_event_loop())
234s                     # Patch the current loop in order to match production
234s                     # behavior
234s                     import nest_asyncio
234s 
234s                     nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _________ ERROR at setup of TerminalAPITest.test_terminal_root_handler _________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object. Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen( # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request. This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
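The `_new_conn` frame above translates low-level socket errors into library-specific exceptions while chaining the original via `from e`. The pattern, reduced to a self-contained sketch (the exception class and function name here are illustrative stand-ins, not urllib3's actual code):

```python
import socket


class NewConnectionError(OSError):
    """Illustrative stand-in for urllib3's exception of the same name."""


def open_connection(host: str, port: int, timeout: float = 1.0) -> socket.socket:
    try:
        # socket.create_connection resolves the host and tries each
        # returned address in turn, much like the urllib3 helper above.
        return socket.create_connection((host, port), timeout=timeout)
    except OSError as e:
        # Chaining with `from e` is what produces the log's
        # "The above exception was the direct cause of the following
        # exception" banner between the two tracebacks.
        raise NewConnectionError(
            f"Failed to establish a new connection: {e}"
        ) from e
```

Calling this against a port with no listener surfaces the wrapped error with the original `ConnectionRefusedError` preserved as `__cause__`.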
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
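`Retry.increment`, quoted above, returns a new Retry object with one counter decremented and raises MaxRetryError once the budget is exhausted. A simplified, self-contained sketch of that bookkeeping, assuming only a single `total` counter (this is not urllib3's actual class):

```python
from dataclasses import dataclass, replace


class MaxRetryError(Exception):
    pass


@dataclass(frozen=True)
class SimpleRetry:
    total: int  # remaining attempts; mirrors Retry(total=N)

    def is_exhausted(self) -> bool:
        return self.total < 0

    def increment(self, error: Exception) -> "SimpleRetry":
        new = replace(self, total=self.total - 1)
        if new.is_exhausted():
            # With total=0, the very first failure exhausts the budget,
            # producing the MaxRetryError seen in this log.
            raise MaxRetryError(f"Max retries exceeded ({error})") from error
        return new
```

This matches the failure mode in the traceback: the test harness uses `Retry(total=0)`, so one refused connection is enough to exhaust retries.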
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
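The adapter code above maps the reason inside a MaxRetryError onto a requests exception type, with an extra isinstance guard that (per the quoted "TODO: Remove this in 3.0.0" comment) exists because NewConnectionError is apparently a ConnectTimeoutError subclass in urllib3 v2. A hedged sketch of that dispatch using stand-in classes defined locally:

```python
class ConnectTimeoutError(Exception):
    """Stand-in for urllib3's ConnectTimeoutError."""


class NewConnectionError(ConnectTimeoutError):
    """Stand-in; modelled as a ConnectTimeoutError subclass, which is
    what the extra isinstance guard in the quoted adapter implies."""


def classify(reason: Exception) -> str:
    # Only "real" connect timeouts become ConnectTimeout; refused
    # connections (NewConnectionError) fall through to the generic
    # ConnectionError, which is the exception seen at the end of
    # this log.
    if isinstance(reason, ConnectTimeoutError):
        if not isinstance(reason, NewConnectionError):
            return "ConnectTimeout"
    return "ConnectionError"
```

That is why a refused connection here surfaces as `requests.exceptions.ConnectionError` rather than `ConnectTimeout`.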
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
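`setup_class` above starts the notebook server in a daemon thread and then polls it until it answers. That launch-and-poll pattern, reduced to a standalone sketch (function name, timeouts, and the TCP-connect probe are stand-ins for the test harness's HTTP `fetch_url` loop):

```python
import socket
import threading
import time
from typing import Optional


def wait_until_alive(host: str, port: int, max_wait: float = 5.0,
                     poll: float = 0.1,
                     server_thread: Optional[threading.Thread] = None) -> None:
    """Poll until a TCP connect to (host, port) succeeds, or give up."""
    deadline = time.monotonic() + max_wait
    last_err: Optional[Exception] = None
    while time.monotonic() < deadline:
        try:
            # A successful connect is the "server is alive" signal,
            # standing in for the GET request in the log above.
            socket.create_connection((host, port), timeout=poll).close()
            return
        except OSError as e:
            last_err = e
            # Mirror the harness's early exit: if the server thread has
            # already died, waiting longer cannot help. This branch is
            # what yields "The notebook server failed to start".
            if server_thread is not None and not server_thread.is_alive():
                raise RuntimeError("The server failed to start") from e
            time.sleep(poll)
    raise RuntimeError("Timed out waiting for the server") from last_err
```

Checking `is_alive()` inside the except block, as the harness does, distinguishes "server still booting" from "server crashed on startup", which is exactly the distinction this traceback reports.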
@classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ______________ ERROR at setup of TerminalCullingTest.test_config _______________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s def create_connection(
234s     address: tuple[str, int],
234s     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s     source_address: tuple[str, int] | None = None,
234s     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s     """Connect to *address* and return the socket object.
234s 
234s     Convenience function.  Connect to *address* (a 2-tuple ``(host,
234s     port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ______________ ERROR at setup of TerminalCullingTest.test_culling ______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ______________ ERROR at setup of FilesTest.test_contents_manager _______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s __________________ ERROR at setup of FilesTest.test_download ___________________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ________________ ERROR at setup of FilesTest.test_hidden_files _________________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _____________ ERROR at setup of FilesTest.test_old_files_redirect ______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position
234s         # for future rewinds in the event of a redirect/retry.
234s         body_pos = set_file_position(body, body_pos)
234s 
234s         try:
234s             # Request a connection from the queue.
234s             timeout_obj = self._get_timeout(timeout)
234s             conn = self._get_conn(timeout=pool_timeout)
234s 
234s             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
234s 
234s             # Is this a closed/new connection that requires CONNECT tunnelling?
234s             if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s                 try:
234s                     self._prepare_proxy(conn)
234s                 except (BaseSSLError, OSError, SocketTimeout) as e:
234s                     self._raise_timeout(
234s                         err=e, url=self.proxy.url, timeout_value=conn.timeout
234s                     )
234s                     raise
234s 
234s             # If we're going to release the connection in ``finally:``, then
234s             # the response doesn't need to know about the connection. Otherwise
234s             # it will also try to release it and we'll have a double-release
234s             # mess.
234s             response_conn = conn if not release_conn else None
234s 
234s             # Make the request on the HTTPConnection object
234s >           response = self._make_request(
234s                 conn,
234s                 method,
234s                 url,
234s                 timeout=timeout_obj,
234s                 body=body,
234s                 headers=headers,
234s                 chunked=chunked,
234s                 retries=retries,
234s                 response_conn=response_conn,
234s                 preload_content=preload_content,
234s                 decode_content=decode_content,
234s                 **response_kw,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s     conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s     self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s     self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s     self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in 
send
234s     self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s     self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s             sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s         except socket.gaierror as e:
234s             raise NameResolutionError(self.host, self, e) from e
234s         except SocketTimeout as e:
234s             raise ConnectTimeoutError(
234s                 self,
234s                 f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s >           resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s     retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool = 
234s _stacktrace = 
234s 
234s     def increment(
234s         self,
234s         method: str | None = None,
234s         url: str | None = None,
234s         response: BaseHTTPResponse | None = None,
234s         error: Exception | None = None,
234s         _pool: ConnectionPool | None = None,
234s         _stacktrace: TracebackType | None = None,
234s     ) -> Retry:
234s         """Return a new Retry object with incremented retry counters.
234s 
234s         :param response: A response object, or None, if the server did not
234s             return a response.
234s         :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s         :param Exception error: An error encountered during the request, or
234s             None if the response was received successfully.
234s 
234s         :return: A new ``Retry`` object.
234s         """
234s         if self.total is False and error:
234s             # Disabled, indicate to re-raise the error.
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or 
ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                     import asyncio
234s 
234s                     asyncio.set_event_loop(asyncio.new_event_loop())
234s                     # Patch the current loop in order to match production
234s                     # behavior
234s                     import nest_asyncio
234s 
234s                     nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
234s 
@classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s __________________ ERROR at setup of FilesTest.test_view_html __________________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s         """
234s         try:
234s >           sock = connection.create_connection(
234s                 (self._dns_host, self.port),
234s                 self.timeout,
234s                 source_address=self.source_address,
234s                 socket_options=self.socket_options,
234s             )
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s     raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s 
234s     def create_connection(
234s         address: tuple[str, int],
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         source_address: tuple[str, int] | None = None,
234s         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s     ) -> socket.socket:
234s         """Connect to *address* and return the socket object.
234s 
234s         Convenience function. Connect to *address* (a 2-tuple ``(host,
234s         port)``) and return the socket object.
Passing the optional
234s         *timeout* parameter will set the timeout on the socket instance
234s         before attempting to connect. If no *timeout* is supplied, the
234s         global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s         is used. If *source_address* is set it must be a tuple of (host, port)
234s         for the socket to bind as a source address before making the connection.
234s         An host of '' or port 0 tells the OS to use the default.
234s         """
234s 
234s         host, port = address
234s         if host.startswith("["):
234s             host = host.strip("[]")
234s         err = None
234s 
234s         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s         # The original create_connection function always returns all records.
234s         family = allowed_gai_family()
234s 
234s         try:
234s             host.encode("idna")
234s         except UnicodeError:
234s             raise LocationParseError(f"'{host}', label empty or too long") from None
234s 
234s         for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s             af, socktype, proto, canonname, sa = res
234s             sock = None
234s             try:
234s                 sock = socket.socket(af, socktype, proto)
234s 
234s                 # If provided, set socket level options before connecting.
234s                 _set_socket_options(sock, socket_options)
234s 
234s                 if timeout is not _DEFAULT_TIMEOUT:
234s                     sock.settimeout(timeout)
234s                 if source_address:
234s                     sock.bind(source_address)
234s >               sock.connect(sa)
234s E               ConnectionRefusedError: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s 
234s     def urlopen(  # type: ignore[override]
234s         self,
234s         method: str,
234s         url: str,
234s         body: _TYPE_BODY | None = None,
234s         headers: typing.Mapping[str, str] | None = None,
234s         retries: Retry | bool | int | None = None,
234s         redirect: bool = True,
234s         assert_same_host: bool = True,
234s         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s         pool_timeout: int | None = None,
234s         release_conn: bool | None = None,
234s         chunked: bool = False,
234s         body_pos: _TYPE_BODY_POSITION | None = None,
234s         preload_content: bool = True,
234s         decode_content: bool = True,
234s         **response_kw: typing.Any,
234s     ) -> BaseHTTPResponse:
234s         """
234s         Get a connection from the pool and perform an HTTP request.
This is the
234s         lowest level call for making a request, so you'll need to specify all
234s         the raw details.
234s 
234s         .. note::
234s 
234s            More commonly, it's appropriate to use a convenience method
234s            such as :meth:`request`.
234s 
234s         .. note::
234s 
234s            `release_conn` will only behave as expected if
234s            `preload_content=False` because we want to make
234s            `preload_content=False` the default behaviour someday soon without
234s            breaking backwards compatibility.
234s 
234s         :param method:
234s             HTTP request method (such as GET, POST, PUT, etc.)
234s 
234s         :param url:
234s             The URL to perform the request on.
234s 
234s         :param body:
234s             Data to send in the request body, either :class:`str`, :class:`bytes`,
234s             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s 
234s         :param headers:
234s             Dictionary of custom headers to send, such as User-Agent,
234s             If-None-Match, etc. If None, pool headers are used. If provided,
234s             these headers completely replace any pool-specific headers.
234s 
234s         :param retries:
234s             Configure the number of retries to allow before raising a
234s             :class:`~urllib3.exceptions.MaxRetryError` exception.
234s 
234s             Pass ``None`` to retry until you receive a response. Pass a
234s             :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s             over different types of retries.
234s             Pass an integer number to retry connection errors that many times,
234s             but no other types of errors. Pass zero to never retry.
234s 
234s             If ``False``, then retries are disabled and any exception is raised
234s             immediately. Also, instead of raising a MaxRetryError on redirects,
234s             the redirect response will be returned.
234s 
234s         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s 
234s         :param redirect:
234s             If True, automatically handle redirects (status codes 301, 302,
234s             303, 307, 308). Each redirect counts as a retry. Disabling retries
234s             will disable redirect, too.
234s 
234s         :param assert_same_host:
234s             If ``True``, will make sure that the host of the pool requests is
234s             consistent else will raise HostChangedError. When ``False``, you can
234s             use the pool on an HTTP proxy and request foreign hosts.
234s 
234s         :param timeout:
234s             If specified, overrides the default timeout for this one
234s             request. It may be a float (in seconds) or an instance of
234s             :class:`urllib3.util.Timeout`.
234s 
234s         :param pool_timeout:
234s             If set and the pool is set to block=True, then this method will
234s             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s             connection is available within the time period.
234s 
234s         :param bool preload_content:
234s             If True, the response's body will be preloaded into memory.
234s 
234s         :param bool decode_content:
234s             If True, will attempt to decode the body based on the
234s             'content-encoding' header.
234s 
234s         :param release_conn:
234s             If False, then the urlopen call will not release the connection
234s             back into the pool once a response is received (but will release if
234s             you read the entire contents of the response such as when
234s             `preload_content=True`). This is useful if you're not preloading
234s             the response's content immediately. You will need to call
234s             ``r.release_conn()`` on the response ``r`` to return the connection
234s             back into the pool. If None, it takes the value of ``preload_content``
234s             which defaults to ``True``.
234s 
234s         :param bool chunked:
234s             If True, urllib3 will send the body using chunked transfer
234s             encoding. Otherwise, urllib3 will send the body using the standard
234s             content-length form. Defaults to False.
234s 
234s         :param int body_pos:
234s             Position to seek to in file-like body in the event of a retry or
234s             redirect. Typically this won't need to be set because urllib3 will
234s             auto-populate the value when needed.
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s __________ ERROR at setup of TestGateway.test_gateway_class_mappings ___________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
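Two defaulting rules are visible in this traceback: the requests adapter chooses chunked transfer-encoding only when there is a body without a `Content-Length` header, and `urlopen` lets `release_conn=None` inherit the value of `preload_content`. A minimal sketch of both rules, with hypothetical helper names (not the libraries' own functions):

```python
# Hypothetical sketch of two defaulting rules shown in the traceback.

def use_chunked(body, headers):
    """Chunked transfer-encoding is used only for a body of unknown length."""
    return not (body is None or "Content-Length" in headers)

def resolve_release_conn(release_conn, preload_content=True):
    """release_conn=None inherits the value of preload_content."""
    return preload_content if release_conn is None else release_conn
```

This matches the adapter call in the log, where `send()` computes `chunked` from the prepared request before handing it to `conn.urlopen`.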
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s GatewayClient.clear_instance() 234s > super().setup_class() 234s 234s notebook/tests/test_gateway.py:138: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s __________ ERROR at setup of TestGateway.test_gateway_get_kernelspecs __________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s                 raise SSLError(e, request=request)
234s 
234s >       raise ConnectionError(e, request=request)
234s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         GatewayClient.clear_instance()
234s >       super().setup_class()
234s 
234s notebook/tests/test_gateway.py:138: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:198: in setup_class
234s     cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _______ ERROR at setup of TestGateway.test_gateway_get_named_kernelspec ________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})",
234s             ) from e
234s 
234s         except OSError as e:
234s >           raise NewConnectionError(
234s                 self, f"Failed to establish a new connection: {e}"
234s             ) from e
234s E           urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s 
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s                 raise SSLError(e, request=request)
234s 
234s >       raise ConnectionError(e, request=request)
234s E       requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         GatewayClient.clear_instance()
234s >       super().setup_class()
234s 
234s notebook/tests/test_gateway.py:138: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:198: in setup_class
234s     cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _________ ERROR at setup of TestGateway.test_gateway_kernel_lifecycle __________
234s 
234s self = 
234s 
234s     def _new_conn(self) -> socket.socket:
234s         """Establish a socket connection and set nodelay settings on it.
234s 
234s         :return: New socket connection.
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
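The `Retry.increment()` body shown above counts `total` down once per failed attempt and raises `MaxRetryError` once the budget is spent; with `Retry(total=0)` (as in this log) the very first connection error exhausts it. A minimal stand-in modeling just that countdown (`MiniRetry` is illustrative, not urllib3's API):

```python
# Hypothetical mini-model of the Retry.increment() bookkeeping in the log:
# `total` counts down once per failed attempt; once it goes below zero the
# retry budget is exhausted and the last error is wrapped and re-raised.
class MiniRetry:
    def __init__(self, total):
        self.total = total

    def increment(self, error):
        new_total = self.total - 1
        if new_total < 0:
            # urllib3 raises MaxRetryError here; RuntimeError stands in
            raise RuntimeError(f"Max retries exceeded ({error!r})")
        return MiniRetry(new_total)

retry = MiniRetry(total=0)  # one attempt, zero retries, like the log
try:
    retry.increment(ConnectionRefusedError(111, "Connection refused"))
except RuntimeError:
    print("retry budget exhausted")
```

This is why the log goes from a single `ConnectionRefusedError` straight to `MaxRetryError`: `total=0` permits the initial attempt but no retry of it.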
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
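The `wait_until_alive()` shown in `launchnotebook.py` above polls the contents API until the server answers, retrying transient errors and re-raising fatal ones. A self-contained sketch of that polling shape (names and timings are illustrative, not the notebook test suite's exact constants):

```python
import time

def wait_until_alive(probe, max_wait=30.0, poll_interval=0.1):
    # Poll `probe` until it succeeds or the time budget runs out.
    # Mirrors the shape of launchnotebook.py's wait_until_alive():
    # transient errors are swallowed and retried; ModuleNotFoundError is
    # fatal and re-raised at once; on timeout the last error is chained.
    last_error = None
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        try:
            return probe()
        except ModuleNotFoundError:
            raise  # errors that should be thrown straight back to the caller
        except Exception as e:
            last_error = e
            time.sleep(poll_interval)
    raise RuntimeError("The server failed to start") from last_error

# Usage: a probe that succeeds on its third poll.
attempts = []
def flaky_probe():
    attempts.append(1)
    if len(attempts) < 3:
        raise OSError("connection refused")
    return "alive"

assert wait_until_alive(flaky_probe, max_wait=5.0, poll_interval=0.0) == "alive"
```

In this log the probe never succeeds (the gateway server process never binds its port), so the loop falls through to the `RuntimeError` seen at the bottom of the chain.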
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
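The adapter code above normalizes `timeout` three ways: a `(connect, read)` tuple is unpacked, an existing timeout object passes through, and a single number applies to both phases. A sketch of that normalization under a hypothetical `TimeoutPair` (requests actually builds urllib3's `Timeout`, aliased `TimeoutSauce`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimeoutPair:
    # stand-in for urllib3's Timeout(connect=..., read=...)
    connect: Optional[float]
    read: Optional[float]

def normalize_timeout(timeout):
    # Same three branches as HTTPAdapter.send() in the traceback above.
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts."
            )
        return TimeoutPair(connect=connect, read=read)
    if isinstance(timeout, TimeoutPair):
        return timeout
    return TimeoutPair(connect=timeout, read=timeout)
```

Note that in this run no timeout was set at all (`Timeout(connect=None, read=None, total=None)`), so the connect attempt failed on the OS's refusal rather than any client-side deadline.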
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s GatewayClient.clear_instance() 234s > super().setup_class() 234s 234s notebook/tests/test_gateway.py:138: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ______________ ERROR at setup of TestGateway.test_gateway_options ______________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
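The repeated "The above exception was the direct cause of the following exception" separators in this log come from explicit exception chaining: each layer catches the lower error and re-raises its own with `raise ... from e`, which sets `__cause__`. A minimal reproduction of the three-deep chain ending in the `RuntimeError` above:

```python
def connect():
    # lowest layer: the refused socket connect
    raise ConnectionRefusedError(111, "Connection refused")

def fetch():
    try:
        connect()
    except OSError as e:
        # `raise ... from e` sets __cause__, which the traceback machinery
        # renders as "The above exception was the direct cause of ..."
        raise RuntimeError("The notebook server failed to start") from e

try:
    fetch()
except RuntimeError as exc:
    assert isinstance(exc.__cause__, ConnectionRefusedError)
```

This is why the log is so long for a single root cause: one `[Errno 111]` is re-wrapped as `NewConnectionError`, then `MaxRetryError`, then `requests.exceptions.ConnectionError`, then the test harness's `RuntimeError`, and each link prints its full frame context.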
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
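The `create_connection()` loop above is where the root failure occurs: `sock.connect(sa)` gets an immediate `[Errno 111]` because nothing is listening on `localhost:12341`. That behaviour can be reproduced directly by connecting to a port known to have no listener (port number obtained by binding to 0 and releasing it; a small race with port reuse is possible but unlikely):

```python
import socket

def probe(host, port, timeout=1.0):
    # Bare-bones version of what create_connection() does per getaddrinfo
    # result: make a stream socket, set the timeout, connect.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        sock.connect((host, port))

# Find a currently-free port: bind to 0, note the assignment, close it.
with socket.socket() as s:
    s.bind(("127.0.0.1", 0))
    unused_port = s.getsockname()[1]

try:
    probe("127.0.0.1", unused_port)
except ConnectionRefusedError:
    print("connection refused, as in the log")
```

A refused connect fails instantly (the kernel answers with RST), which is why the test harness's poll loop burns its whole wait budget in fast iterations rather than blocking on any one attempt.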
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s GatewayClient.clear_instance() 234s > super().setup_class() 234s 234s notebook/tests/test_gateway.py:138: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _________ ERROR at setup of TestGateway.test_gateway_session_lifecycle _________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None:
234s redirect -= 1
234s cause = "too many redirects"
234s response_redirect_location = response.get_redirect_location()
234s if response_redirect_location:
234s redirect_location = response_redirect_location
234s status = response.status
234s
234s else:
234s # Incrementing because of a server error like a 500 in
234s # status_forcelist and the given method is in the allowed_methods
234s cause = ResponseError.GENERIC_ERROR
234s if response and response.status:
234s if status_count is not None:
234s status_count -= 1
234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s status = response.status
234s
234s history = self.history + (
234s RequestHistory(method, url, error, status, redirect_location),
234s )
234s
234s new_retry = self.new(
234s total=total,
234s connect=connect,
234s read=read,
234s redirect=redirect,
234s status=status_count,
234s other=other,
234s history=history,
234s )
234s
234s if new_retry.is_exhausted():
234s reason = error or ResponseError(cause)
234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls =
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s > cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content.
234s :param timeout: (optional) How long to wait for the server to send
234s data before giving up, as a float, or a :ref:`(connect timeout,
234s read timeout) ` tuple.
234s :type timeout: float or tuple or urllib3 Timeout object
234s :param verify: (optional) Either a boolean, in which case it controls whether
234s we verify the server's TLS certificate, or a string, in which case it
234s must be a path to a CA bundle to use
234s :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response
234s """
234s
234s try:
234s conn = self.get_connection(request.url, proxies)
234s except LocationValueError as e:
234s raise InvalidURL(e, request=request)
234s
234s self.cert_verify(conn, request.url, verify, cert)
234s url = self.request_url(request, proxies)
234s self.add_headers(
234s request,
234s stream=stream,
234s timeout=timeout,
234s verify=verify,
234s cert=cert,
234s proxies=proxies,
234s )
234s
234s chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s if isinstance(timeout, tuple):
234s try:
234s connect, read = timeout
234s timeout = TimeoutSauce(connect=connect, read=read)
234s except ValueError:
234s raise ValueError(
234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s f"or a single float to set both timeouts to the same value."
234s )
234s elif isinstance(timeout, TimeoutSauce):
234s pass
234s else:
234s timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s try:
234s resp = conn.urlopen(
234s method=request.method,
234s url=url,
234s body=request.body,
234s headers=request.headers,
234s redirect=False,
234s assert_same_host=False,
234s preload_content=False,
234s decode_content=False,
234s retries=self.max_retries,
234s timeout=timeout,
234s chunked=chunked,
234s )
234s
234s except (ProtocolError, OSError) as err:
234s raise ConnectionError(err, request=request)
234s
234s except MaxRetryError as e:
234s if isinstance(e.reason, ConnectTimeoutError):
234s # TODO: Remove this in 3.0.0: see #2811
234s if not isinstance(e.reason, NewConnectionError):
234s raise ConnectTimeout(e, request=request)
234s
234s if isinstance(e.reason, ResponseError):
234s raise RetryError(e, request=request)
234s
234s if isinstance(e.reason, _ProxyError):
234s raise ProxyError(e, request=request)
234s
234s if isinstance(e.reason, _SSLError):
234s # This branch is for urllib3 v1.22 and later.
234s raise SSLError(e, request=request)
234s
234s > raise ConnectionError(e, request=request)
234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s cls =
234s
234s @classmethod
234s def setup_class(cls):
234s GatewayClient.clear_instance()
234s > super().setup_class()
234s
234s notebook/tests/test_gateway.py:138:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:198: in setup_class
234s cls.wait_until_alive()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s cls =
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s cls.fetch_url(url)
234s except ModuleNotFoundError as error:
234s # Errors that should be immediately thrown back to caller
234s raise error
234s except Exception as e:
234s if not cls.notebook_thread.is_alive():
234s > raise RuntimeError("The notebook server failed to start") from e
234s E RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s _________ ERROR at setup of NotebookAppTests.test_list_running_servers _________
234s
234s self =
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s > sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s def create_connection(
234s address: tuple[str, int],
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s source_address: tuple[str, int] | None = None,
234s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s """Connect to *address* and return the socket object.
234s
234s Convenience function. Connect to *address* (a 2-tuple ``(host,
234s port)``) and return the socket object. Passing the optional
234s *timeout* parameter will set the timeout on the socket instance
234s before attempting to connect. If no *timeout* is supplied, the
234s global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s is used. If *source_address* is set it must be a tuple of (host, port)
234s for the socket to bind as a source address before making the connection.
234s An host of '' or port 0 tells the OS to use the default.
234s """
234s
234s host, port = address
234s if host.startswith("["):
234s host = host.strip("[]")
234s err = None
234s
234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s # The original create_connection function always returns all records.
234s family = allowed_gai_family()
234s
234s try:
234s host.encode("idna")
234s except UnicodeError:
234s raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s af, socktype, proto, canonname, sa = res
234s sock = None
234s try:
234s sock = socket.socket(af, socktype, proto)
234s
234s # If provided, set socket level options before connecting.
234s _set_socket_options(sock, socket_options)
234s
234s if timeout is not _DEFAULT_TIMEOUT:
234s sock.settimeout(timeout)
234s if source_address:
234s sock.bind(source_address)
234s > sock.connect(sa)
234s E ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s def urlopen( # type: ignore[override]
234s self,
234s method: str,
234s url: str,
234s body: _TYPE_BODY | None = None,
234s headers: typing.Mapping[str, str] | None = None,
234s retries: Retry | bool | int | None = None,
234s redirect: bool = True,
234s assert_same_host: bool = True,
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s pool_timeout: int | None = None,
234s release_conn: bool | None = None,
234s chunked: bool = False,
234s body_pos: _TYPE_BODY_POSITION | None = None,
234s preload_content: bool = True,
234s decode_content: bool = True,
234s **response_kw: typing.Any,
234s ) -> BaseHTTPResponse:
234s """
234s Get a connection from the pool and perform an HTTP request. This is the
234s lowest level call for making a request, so you'll need to specify all
234s the raw details.
234s
234s .. note::
234s
234s More commonly, it's appropriate to use a convenience method
234s such as :meth:`request`.
234s
234s .. note::
234s
234s `release_conn` will only behave as expected if
234s `preload_content=False` because we want to make
234s `preload_content=False` the default behaviour someday soon without
234s breaking backwards compatibility.
234s
234s :param method:
234s HTTP request method (such as GET, POST, PUT, etc.)
234s
234s :param url:
234s The URL to perform the request on.
234s
234s :param body:
234s Data to send in the request body, either :class:`str`, :class:`bytes`,
234s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s :param headers:
234s Dictionary of custom headers to send, such as User-Agent,
234s If-None-Match, etc. If None, pool headers are used. If provided,
234s these headers completely replace any pool-specific headers.
234s
234s :param retries:
234s Configure the number of retries to allow before raising a
234s :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s Pass ``None`` to retry until you receive a response. Pass a
234s :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s over different types of retries.
234s Pass an integer number to retry connection errors that many times,
234s but no other types of errors. Pass zero to never retry.
234s
234s If ``False``, then retries are disabled and any exception is raised
234s immediately. Also, instead of raising a MaxRetryError on redirects,
234s the redirect response will be returned.
234s
234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s :param redirect:
234s If True, automatically handle redirects (status codes 301, 302,
234s 303, 307, 308). Each redirect counts as a retry. Disabling retries
234s will disable redirect, too.
234s
234s :param assert_same_host:
234s If ``True``, will make sure that the host of the pool requests is
234s consistent else will raise HostChangedError. When ``False``, you can
234s use the pool on an HTTP proxy and request foreign hosts.
234s
234s :param timeout:
234s If specified, overrides the default timeout for this one
234s request. It may be a float (in seconds) or an instance of
234s :class:`urllib3.util.Timeout`.
234s
234s :param pool_timeout:
234s If set and the pool is set to block=True, then this method will
234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s connection is available within the time period.
234s
234s :param bool preload_content:
234s If True, the response's body will be preloaded into memory.
234s
234s :param bool decode_content:
234s If True, will attempt to decode the body based on the
234s 'content-encoding' header.
234s
234s :param release_conn:
234s If False, then the urlopen call will not release the connection
234s back into the pool once a response is received (but will release if
234s you read the entire contents of the response such as when
234s `preload_content=True`). This is useful if you're not preloading
234s the response's content immediately. You will need to call
234s ``r.release_conn()`` on the response ``r`` to return the connection
234s back into the pool. If None, it takes the value of ``preload_content``
234s which defaults to ``True``.
234s
234s :param bool chunked:
234s If True, urllib3 will send the body using chunked transfer
234s encoding. Otherwise, urllib3 will send the body using the standard
234s content-length form. Defaults to False.
234s
234s :param int body_pos:
234s Position to seek to in file-like body in the event of a retry or
234s redirect. Typically this won't need to be set because urllib3 will
234s auto-populate the value when needed.
234s """
234s parsed_url = parse_url(url)
234s destination_scheme = parsed_url.scheme
234s
234s if headers is None:
234s headers = self.headers
234s
234s if not isinstance(retries, Retry):
234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s if release_conn is None:
234s release_conn = preload_content
234s
234s # Check host
234s if assert_same_host and not self.is_same_host(url):
234s raise HostChangedError(self, url, retries)
234s
234s # Ensure that the URL we're connecting to is properly encoded
234s if url.startswith("/"):
234s url = to_str(_encode_target(url))
234s else:
234s url = to_str(parsed_url.url)
234s
234s conn = None
234s
234s # Track whether `conn` needs to be released before
234s # returning/raising/recursing. Update this variable if necessary, and
234s # leave `release_conn` constant throughout the function. That way, if
234s # the function recurses, the original value of `release_conn` will be
234s # passed down into the recursive call, and its value will be respected.
234s #
234s # See issue #651 [1] for details.
234s #
234s # [1]
234s release_this_conn = release_conn
234s
234s http_tunnel_required = connection_requires_http_tunnel(
234s self.proxy, self.proxy_config, destination_scheme
234s )
234s
234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s # have to copy the headers dict so we can safely change it without those
234s # changes being reflected in anyone else's copy.
234s if not http_tunnel_required:
234s headers = headers.copy() # type: ignore[attr-defined]
234s headers.update(self.proxy_headers) # type: ignore[union-attr]
234s
234s # Must keep the exception bound to a separate variable or else Python 3
234s # complains about UnboundLocalError.
234s err = None
234s
234s # Keep track of whether we cleanly exited the except block. This
234s # ensures we do proper cleanup in finally.
234s clean_exit = False
234s
234s # Rewind body position, if needed. Record current position
234s # for future rewinds in the event of a redirect/retry.
234s body_pos = set_file_position(body, body_pos)
234s
234s try:
234s # Request a connection from the queue.
234s timeout_obj = self._get_timeout(timeout)
234s conn = self._get_conn(timeout=pool_timeout)
234s
234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s
234s # Is this a closed/new connection that requires CONNECT tunnelling?
234s if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s try:
234s self._prepare_proxy(conn)
234s except (BaseSSLError, OSError, SocketTimeout) as e:
234s self._raise_timeout(
234s err=e, url=self.proxy.url, timeout_value=conn.timeout
234s )
234s raise
234s
234s # If we're going to release the connection in ``finally:``, then
234s # the response doesn't need to know about the connection. Otherwise
234s # it will also try to release it and we'll have a double-release
234s # mess.
234s response_conn = conn if not release_conn else None
234s
234s # Make the request on the HTTPConnection object
234s > response = self._make_request(
234s conn,
234s method,
234s url,
234s timeout=timeout_obj,
234s body=body,
234s headers=headers,
234s chunked=chunked,
234s retries=retries,
234s response_conn=response_conn,
234s preload_content=preload_content,
234s decode_content=decode_content,
234s **response_kw,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s except socket.gaierror as e:
234s raise NameResolutionError(self.host, self, e) from e
234s except SocketTimeout as e:
234s raise ConnectTimeoutError(
234s self,
234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s ) from e
234s
234s except OSError as e:
234s > raise NewConnectionError(
234s self, f"Failed to establish a new connection: {e}"
234s ) from e
234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content.
234s :param timeout: (optional) How long to wait for the server to send
234s data before giving up, as a float, or a :ref:`(connect timeout,
234s read timeout) ` tuple.
234s :type timeout: float or tuple or urllib3 Timeout object
234s :param verify: (optional) Either a boolean, in which case it controls whether
234s we verify the server's TLS certificate, or a string, in which case it
234s must be a path to a CA bundle to use
234s :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response
234s """
234s
234s try:
234s conn = self.get_connection(request.url, proxies)
234s except LocationValueError as e:
234s raise InvalidURL(e, request=request)
234s
234s self.cert_verify(conn, request.url, verify, cert)
234s url = self.request_url(request, proxies)
234s self.add_headers(
234s request,
234s stream=stream,
234s timeout=timeout,
234s verify=verify,
234s cert=cert,
234s proxies=proxies,
234s )
234s
234s chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s if isinstance(timeout, tuple):
234s try:
234s connect, read = timeout
234s timeout = TimeoutSauce(connect=connect, read=read)
234s except ValueError:
234s raise ValueError(
234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s f"or a single float to set both timeouts to the same value."
234s )
234s elif isinstance(timeout, TimeoutSauce):
234s pass
234s else:
234s timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s try:
234s > resp = conn.urlopen(
234s method=request.method,
234s url=url,
234s body=request.body,
234s headers=request.headers,
234s redirect=False,
234s assert_same_host=False,
234s preload_content=False,
234s decode_content=False,
234s retries=self.max_retries,
234s timeout=timeout,
234s chunked=chunked,
234s )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool =
234s _stacktrace =
234s
234s def increment(
234s self,
234s method: str | None = None,
234s url: str | None = None,
234s response: BaseHTTPResponse | None = None,
234s error: Exception | None = None,
234s _pool: ConnectionPool | None = None,
234s _stacktrace: TracebackType | None = None,
234s ) -> Retry:
234s """Return a new Retry object with incremented retry counters.
234s
234s :param response: A response object, or None, if the server did not
234s return a response.
234s :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s :param Exception error: An error encountered during the request, or
234s None if the response was received successfully.
234s
234s :return: A new ``Retry`` object.
234s """
234s if self.total is False and error:
234s # Disabled, indicate to re-raise the error.
234s raise reraise(type(error), error, _stacktrace)
234s
234s total = self.total
234s if total is not None:
234s total -= 1
234s
234s connect = self.connect
234s read = self.read
234s redirect = self.redirect
234s status_count = self.status
234s other = self.other
234s cause = "unknown"
234s status = None
234s redirect_location = None
234s
234s if error and self._is_connection_error(error):
234s # Connect retry?
234s if connect is False:
234s raise reraise(type(error), error, _stacktrace)
234s elif connect is not None:
234s connect -= 1
234s
234s elif error and self._is_read_error(error):
234s # Read retry?
234s if read is False or method is None or not self._is_method_retryable(method):
234s raise reraise(type(error), error, _stacktrace)
234s elif read is not None:
234s read -= 1
234s
234s elif error:
234s # Other retry?
234s if other is not None:
234s other -= 1
234s
234s elif response and response.get_redirect_location():
234s # Redirect retry?
234s if redirect is not None:
234s redirect -= 1
234s cause = "too many redirects"
234s response_redirect_location = response.get_redirect_location()
234s if response_redirect_location:
234s redirect_location = response_redirect_location
234s status = response.status
234s
234s else:
234s # Incrementing because of a server error like a 500 in
234s # status_forcelist and the given method is in the allowed_methods
234s cause = ResponseError.GENERIC_ERROR
234s if response and response.status:
234s if status_count is not None:
234s status_count -= 1
234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s status = response.status
234s
234s history = self.history + (
234s RequestHistory(method, url, error, status, redirect_location),
234s )
234s
234s new_retry = self.new(
234s total=total,
234s connect=connect,
234s read=read,
234s redirect=redirect,
234s status=status_count,
234s other=other,
234s history=history,
234s )
234s
234s if new_retry.is_exhausted():
234s reason = error or ResponseError(cause)
234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls =
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s > cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content.
234s :param timeout: (optional) How long to wait for the server to send
234s data before giving up, as a float, or a :ref:`(connect timeout,
234s read timeout) ` tuple.
234s :type timeout: float or tuple or urllib3 Timeout object
234s :param verify: (optional) Either a boolean, in which case it controls whether
234s we verify the server's TLS certificate, or a string, in which case it
234s must be a path to a CA bundle to use
234s :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response
234s """
234s
234s try:
234s conn = self.get_connection(request.url, proxies)
234s except LocationValueError as e:
234s raise InvalidURL(e, request=request)
234s
234s self.cert_verify(conn, request.url, verify, cert)
234s url = self.request_url(request, proxies)
234s self.add_headers(
234s request,
234s stream=stream,
234s timeout=timeout,
234s verify=verify,
234s cert=cert,
234s proxies=proxies,
234s )
234s
234s chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s if isinstance(timeout, tuple):
234s try:
234s connect, read = timeout
234s timeout = TimeoutSauce(connect=connect, read=read)
234s except ValueError:
234s raise ValueError(
234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s f"or a single float to set both timeouts to the same value."
234s )
234s elif isinstance(timeout, TimeoutSauce):
234s pass
234s else:
234s timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s try:
234s resp = conn.urlopen(
234s method=request.method,
234s url=url,
234s body=request.body,
234s headers=request.headers,
234s redirect=False,
234s assert_same_host=False,
234s preload_content=False,
234s decode_content=False,
234s retries=self.max_retries,
234s timeout=timeout,
234s chunked=chunked,
234s )
234s
234s except (ProtocolError, OSError) as err:
234s raise ConnectionError(err, request=request)
234s
234s except MaxRetryError as e:
234s if isinstance(e.reason, ConnectTimeoutError):
234s # TODO: Remove this in 3.0.0: see #2811
234s if not isinstance(e.reason, NewConnectionError):
234s raise ConnectTimeout(e, request=request)
234s
234s if isinstance(e.reason, ResponseError):
234s raise RetryError(e, request=request)
234s
234s if isinstance(e.reason, _ProxyError):
234s raise ProxyError(e, request=request)
234s
234s if isinstance(e.reason, _SSLError):
234s # This branch is for urllib3 v1.22 and later.
234s raise SSLError(e, request=request)
234s
234s > raise ConnectionError(e, request=request)
234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s cls =
234s
234s @classmethod
234s def setup_class(cls):
234s cls.tmp_dir = TemporaryDirectory()
234s def tmp(*parts):
234s path = os.path.join(cls.tmp_dir.name, *parts)
234s try:
234s os.makedirs(path)
234s except OSError as e:
234s if e.errno != errno.EEXIST:
234s raise
234s return path
234s
234s cls.home_dir = tmp('home')
234s data_dir = cls.data_dir = tmp('data')
234s config_dir = cls.config_dir = tmp('config')
234s runtime_dir = cls.runtime_dir = tmp('runtime')
234s cls.notebook_dir = tmp('notebooks')
234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s cls.env_patch.start()
234s # Patch systemwide & user-wide data & config directories, to isolate
234s # the tests from oddities of the local setup. But leave Python env
234s # locations alone, so data files for e.g. nbconvert are accessible.
234s # If this isolation isn't sufficient, you may need to run the tests in
234s # a virtualenv or conda env.
234s cls.path_patch = patch.multiple(
234s jupyter_core.paths,
234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s )
234s cls.path_patch.start()
234s
234s config = cls.config or Config()
234s config.NotebookNotary.db_file = ':memory:'
234s
234s cls.token = hexlify(os.urandom(4)).decode('ascii')
234s
234s started = Event()
234s def start_thread():
234s try:
234s bind_args = cls.get_bind_args()
234s app = cls.notebook = NotebookApp(
234s port_retries=0,
234s open_browser=False,
234s config_dir=cls.config_dir,
234s data_dir=cls.data_dir,
234s runtime_dir=cls.runtime_dir,
234s notebook_dir=cls.notebook_dir,
234s base_url=cls.url_prefix,
234s config=config,
234s allow_root=True,
234s token=cls.token,
234s **bind_args
234s )
234s if "asyncio" in sys.modules:
234s app._init_asyncio_patch()
234s import asyncio
234s
234s asyncio.set_event_loop(asyncio.new_event_loop())
234s # Patch the current loop in order to match production
234s # behavior
234s import nest_asyncio
234s
234s nest_asyncio.apply()
234s # don't register signal handler during tests
234s app.init_signal = lambda : None
234s # clear log handlers and propagate to root for nose to capture it
234s # needs to be redone after initialize, which reconfigures logging
234s app.log.propagate = True
234s app.log.handlers = []
234s app.initialize(argv=cls.get_argv())
234s app.log.propagate = True
234s app.log.handlers = []
234s loop = IOLoop.current()
234s loop.add_callback(started.set)
234s app.start()
234s finally:
234s # set the event, so failure to start doesn't cause a hang
234s started.set()
234s app.session_manager.close()
234s cls.notebook_thread = Thread(target=start_thread)
234s cls.notebook_thread.daemon = True
234s cls.notebook_thread.start()
234s started.wait()
234s > cls.wait_until_alive()
234s
234s notebook/tests/launchnotebook.py:198:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s cls =
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s cls.fetch_url(url)
234s except ModuleNotFoundError as error:
234s # Errors that should be immediately thrown back to caller
234s raise error
234s except Exception as e:
234s if not cls.notebook_thread.is_alive():
234s > raise RuntimeError("The notebook server failed to start") from e
234s E RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ___________ ERROR at setup of NotebookAppTests.test_log_json_default ___________
234s
234s self =
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s > sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s def create_connection(
234s address: tuple[str, int],
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s source_address: tuple[str, int] | None = None,
234s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s """Connect to *address* and return the socket object.
234s
234s Convenience function. Connect to *address* (a 2-tuple ``(host,
234s port)``) and return the socket object. Passing the optional
234s *timeout* parameter will set the timeout on the socket instance
234s before attempting to connect. If no *timeout* is supplied, the
234s global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s is used. If *source_address* is set it must be a tuple of (host, port)
234s for the socket to bind as a source address before making the connection.
234s An host of '' or port 0 tells the OS to use the default.
234s """
234s
234s host, port = address
234s if host.startswith("["):
234s host = host.strip("[]")
234s err = None
234s
234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s # The original create_connection function always returns all records.
234s family = allowed_gai_family()
234s
234s try:
234s host.encode("idna")
234s except UnicodeError:
234s raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s af, socktype, proto, canonname, sa = res
234s sock = None
234s try:
234s sock = socket.socket(af, socktype, proto)
234s
234s # If provided, set socket level options before connecting.
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s __________ ERROR at setup of NotebookAppTests.test_validate_log_json ___________
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ___ ERROR at setup of NotebookUnixSocketTests.test_list_running_sock_servers ___ 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = 
None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. 
Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. 
This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def connect(self): 234s sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 234s sock.settimeout(self.timeout) 234s socket_path = unquote(urlparse(self.unix_socket_url).netloc) 234s > sock.connect(socket_path) 234s E FileNotFoundError: [Errno 2] No such file or directory 234s 234s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: FileNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None 234s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 
'http://squid.internal:3128'}) 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:470: in increment 234s raise reraise(type(error), error, _stacktrace) 234s /usr/lib/python3/dist-packages/urllib3/util/util.py:38: in reraise 234s raise value.with_traceback(tb) 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: in urlopen 234s response = self._make_request( 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def connect(self): 234s sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 234s sock.settimeout(self.timeout) 234s socket_path = unquote(urlparse(self.unix_socket_url).netloc) 234s > sock.connect(socket_path) 234s E urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) 
234s 234s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: ProtocolError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:242: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:51: in get 234s return request('get', url, **kwargs) 234s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:46: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None 234s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'}) 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 
234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s > raise ConnectionError(err, request=request) 234s E requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:501: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ______________ ERROR at setup of NotebookUnixSocketTests.test_run ______________ 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = 
None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. 
Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. 
This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def connect(self): 234s sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 234s sock.settimeout(self.timeout) 234s socket_path = unquote(urlparse(self.unix_socket_url).netloc) 234s > sock.connect(socket_path) 234s E FileNotFoundError: [Errno 2] No such file or directory 234s 234s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: FileNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None 234s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 
'http://squid.internal:3128'}) 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:470: in increment 234s raise reraise(type(error), error, _stacktrace) 234s /usr/lib/python3/dist-packages/urllib3/util/util.py:38: in reraise 234s raise value.with_traceback(tb) 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: in urlopen 234s response = self._make_request( 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def connect(self): 234s sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 234s sock.settimeout(self.timeout) 234s socket_path = unquote(urlparse(self.unix_socket_url).netloc) 234s > sock.connect(socket_path) 234s E urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) 
234s 234s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: ProtocolError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:242: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:51: in get 234s return request('get', url, **kwargs) 234s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:46: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None 234s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'}) 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 
234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s > raise ConnectionError(err, request=request) 234s E requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:501: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s cls.tmp_dir = TemporaryDirectory() 234s def tmp(*parts): 234s path = os.path.join(cls.tmp_dir.name, *parts) 234s try: 234s os.makedirs(path) 234s except OSError as e: 234s if e.errno != errno.EEXIST: 234s raise 234s return path 234s 234s cls.home_dir = tmp('home') 234s data_dir = cls.data_dir = tmp('data') 234s config_dir = cls.config_dir = tmp('config') 234s runtime_dir = cls.runtime_dir = tmp('runtime') 234s cls.notebook_dir = tmp('notebooks') 234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 234s cls.env_patch.start() 234s # Patch systemwide & user-wide data & config directories, to isolate 234s # the tests from oddities of the local setup. But leave Python env 234s # locations alone, so data files for e.g. nbconvert are accessible. 234s # If this isolation isn't sufficient, you may need to run the tests in 234s # a virtualenv or conda env. 
234s cls.path_patch = patch.multiple( 234s jupyter_core.paths, 234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 234s ) 234s cls.path_patch.start() 234s 234s config = cls.config or Config() 234s config.NotebookNotary.db_file = ':memory:' 234s 234s cls.token = hexlify(os.urandom(4)).decode('ascii') 234s 234s started = Event() 234s def start_thread(): 234s try: 234s bind_args = cls.get_bind_args() 234s app = cls.notebook = NotebookApp( 234s port_retries=0, 234s open_browser=False, 234s config_dir=cls.config_dir, 234s data_dir=cls.data_dir, 234s runtime_dir=cls.runtime_dir, 234s notebook_dir=cls.notebook_dir, 234s base_url=cls.url_prefix, 234s config=config, 234s allow_root=True, 234s token=cls.token, 234s **bind_args 234s ) 234s if "asyncio" in sys.modules: 234s app._init_asyncio_patch() 234s import asyncio 234s 234s asyncio.set_event_loop(asyncio.new_event_loop()) 234s # Patch the current loop in order to match production 234s # behavior 234s import nest_asyncio 234s 234s nest_asyncio.apply() 234s # don't register signal handler during tests 234s app.init_signal = lambda : None 234s # clear log handlers and propagate to root for nose to capture it 234s # needs to be redone after initialize, which reconfigures logging 234s app.log.propagate = True 234s app.log.handlers = [] 234s app.initialize(argv=cls.get_argv()) 234s app.log.propagate = True 234s app.log.handlers = [] 234s loop = IOLoop.current() 234s loop.add_callback(started.set) 234s app.start() 234s finally: 234s # set the event, so failure to start doesn't cause a hang 234s started.set() 234s app.session_manager.close() 234s cls.notebook_thread = Thread(target=start_thread) 234s cls.notebook_thread.daemon = True 234s cls.notebook_thread.start() 234s started.wait() 234s > cls.wait_until_alive() 234s 234s notebook/tests/launchnotebook.py:198: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s 
@classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _____ ERROR at setup of NotebookAppJSONLoggingTests.test_log_json_enabled ______ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. 
Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 
234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. 
This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 
234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 
234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. 
Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in 
send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 
234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 
234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or 
ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s /usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 
234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 
234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s > super().setup_class() 234s 234s notebook/tests/test_notebookapp.py:212: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s _____ ERROR at setup of NotebookAppJSONLoggingTests.test_validate_log_json _____ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None 234s 234s # Make the request on the HTTPConnection object 234s > response = self._make_request( 234s conn, 234s method, 234s url, 234s timeout=timeout_obj, 234s body=body, 234s headers=headers, 234s chunked=chunked, 234s retries=retries, 234s response_conn=response_conn, 234s preload_content=preload_content, 234s decode_content=decode_content, 234s **response_kw, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 234s conn.request( 234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 234s self.endheaders() 234s /usr/lib/python3.12/http/client.py:1331: in endheaders 234s self._send_output(message_body, encode_chunked=encode_chunked) 234s /usr/lib/python3.12/http/client.py:1091: in _send_output 234s self.send(msg) 234s /usr/lib/python3.12/http/client.py:1035: in send 234s self.connect() 234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 234s self.sock = self._new_conn() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 234s """ 234s try: 234s sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s except socket.gaierror as e: 234s raise NameResolutionError(self.host, self, e) from e 234s except SocketTimeout as e: 234s raise ConnectTimeoutError( 234s self, 234s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 234s ) from e 234s 234s except OSError as e: 234s > raise NewConnectionError( 234s self, f"Failed to establish a new connection: {e}" 234s ) from e 234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s > resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:486: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 234s retries = retries.increment( 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s method = 'GET', url = '/a%40b/api/contents', response = None 234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 234s _pool = 234s _stacktrace = 234s 234s def increment( 234s self, 234s method: str | 
None = None, 234s url: str | None = None, 234s response: BaseHTTPResponse | None = None, 234s error: Exception | None = None, 234s _pool: ConnectionPool | None = None, 234s _stacktrace: TracebackType | None = None, 234s ) -> Retry: 234s """Return a new Retry object with incremented retry counters. 234s 234s :param response: A response object, or None, if the server did not 234s return a response. 234s :type response: :class:`~urllib3.response.BaseHTTPResponse` 234s :param Exception error: An error encountered during the request, or 234s None if the response was received successfully. 234s 234s :return: A new ``Retry`` object. 234s """ 234s if self.total is False and error: 234s # Disabled, indicate to re-raise the error. 234s raise reraise(type(error), error, _stacktrace) 234s 234s total = self.total 234s if total is not None: 234s total -= 1 234s 234s connect = self.connect 234s read = self.read 234s redirect = self.redirect 234s status_count = self.status 234s other = self.other 234s cause = "unknown" 234s status = None 234s redirect_location = None 234s 234s if error and self._is_connection_error(error): 234s # Connect retry? 234s if connect is False: 234s raise reraise(type(error), error, _stacktrace) 234s elif connect is not None: 234s connect -= 1 234s 234s elif error and self._is_read_error(error): 234s # Read retry? 234s if read is False or method is None or not self._is_method_retryable(method): 234s raise reraise(type(error), error, _stacktrace) 234s elif read is not None: 234s read -= 1 234s 234s elif error: 234s # Other retry? 234s if other is not None: 234s other -= 1 234s 234s elif response and response.get_redirect_location(): 234s # Redirect retry? 
234s if redirect is not None: 234s redirect -= 1 234s cause = "too many redirects" 234s response_redirect_location = response.get_redirect_location() 234s if response_redirect_location: 234s redirect_location = response_redirect_location 234s status = response.status 234s 234s else: 234s # Incrementing because of a server error like a 500 in 234s # status_forcelist and the given method is in the allowed_methods 234s cause = ResponseError.GENERIC_ERROR 234s if response and response.status: 234s if status_count is not None: 234s status_count -= 1 234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 234s status = response.status 234s 234s history = self.history + ( 234s RequestHistory(method, url, error, status, redirect_location), 234s ) 234s 234s new_retry = self.new( 234s total=total, 234s connect=connect, 234s read=read, 234s redirect=redirect, 234s status=status_count, 234s other=other, 234s history=history, 234s ) 234s 234s if new_retry.is_exhausted(): 234s reason = error or ResponseError(cause) 234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 234s 234s During handling of the above exception, another exception occurred: 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s > cls.fetch_url(url) 234s 234s notebook/tests/launchnotebook.py:53: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:82: in fetch_url 234s return requests.get(url) 234s 
/usr/lib/python3/dist-packages/requests/api.py:73: in get 234s return request("get", url, params=params, **kwargs) 234s /usr/lib/python3/dist-packages/requests/api.py:59: in request 234s return session.request(method=method, url=url, **kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 234s resp = self.send(prep, **send_kwargs) 234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 234s r = adapter.send(request, **kwargs) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s request = , stream = False 234s timeout = Timeout(connect=None, read=None, total=None), verify = True 234s cert = None, proxies = OrderedDict() 234s 234s def send( 234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 234s ): 234s """Sends PreparedRequest object. Returns Response object. 234s 234s :param request: The :class:`PreparedRequest ` being sent. 234s :param stream: (optional) Whether to stream the request content. 234s :param timeout: (optional) How long to wait for the server to send 234s data before giving up, as a float, or a :ref:`(connect timeout, 234s read timeout) ` tuple. 234s :type timeout: float or tuple or urllib3 Timeout object 234s :param verify: (optional) Either a boolean, in which case it controls whether 234s we verify the server's TLS certificate, or a string, in which case it 234s must be a path to a CA bundle to use 234s :param cert: (optional) Any user-provided SSL certificate to be trusted. 234s :param proxies: (optional) The proxies dictionary to apply to the request. 
234s :rtype: requests.Response 234s """ 234s 234s try: 234s conn = self.get_connection(request.url, proxies) 234s except LocationValueError as e: 234s raise InvalidURL(e, request=request) 234s 234s self.cert_verify(conn, request.url, verify, cert) 234s url = self.request_url(request, proxies) 234s self.add_headers( 234s request, 234s stream=stream, 234s timeout=timeout, 234s verify=verify, 234s cert=cert, 234s proxies=proxies, 234s ) 234s 234s chunked = not (request.body is None or "Content-Length" in request.headers) 234s 234s if isinstance(timeout, tuple): 234s try: 234s connect, read = timeout 234s timeout = TimeoutSauce(connect=connect, read=read) 234s except ValueError: 234s raise ValueError( 234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 234s f"or a single float to set both timeouts to the same value." 234s ) 234s elif isinstance(timeout, TimeoutSauce): 234s pass 234s else: 234s timeout = TimeoutSauce(connect=timeout, read=timeout) 234s 234s try: 234s resp = conn.urlopen( 234s method=request.method, 234s url=url, 234s body=request.body, 234s headers=request.headers, 234s redirect=False, 234s assert_same_host=False, 234s preload_content=False, 234s decode_content=False, 234s retries=self.max_retries, 234s timeout=timeout, 234s chunked=chunked, 234s ) 234s 234s except (ProtocolError, OSError) as err: 234s raise ConnectionError(err, request=request) 234s 234s except MaxRetryError as e: 234s if isinstance(e.reason, ConnectTimeoutError): 234s # TODO: Remove this in 3.0.0: see #2811 234s if not isinstance(e.reason, NewConnectionError): 234s raise ConnectTimeout(e, request=request) 234s 234s if isinstance(e.reason, ResponseError): 234s raise RetryError(e, request=request) 234s 234s if isinstance(e.reason, _ProxyError): 234s raise ProxyError(e, request=request) 234s 234s if isinstance(e.reason, _SSLError): 234s # This branch is for urllib3 v1.22 and later. 
234s raise SSLError(e, request=request) 234s 234s > raise ConnectionError(e, request=request) 234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 234s 234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 234s 234s The above exception was the direct cause of the following exception: 234s 234s cls = 234s 234s @classmethod 234s def setup_class(cls): 234s > super().setup_class() 234s 234s notebook/tests/test_notebookapp.py:212: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s notebook/tests/launchnotebook.py:198: in setup_class 234s cls.wait_until_alive() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s cls = 234s 234s @classmethod 234s def wait_until_alive(cls): 234s """Wait for the server to be alive""" 234s url = cls.base_url() + 'api/contents' 234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 234s try: 234s cls.fetch_url(url) 234s except ModuleNotFoundError as error: 234s # Errors that should be immediately thrown back to caller 234s raise error 234s except Exception as e: 234s if not cls.notebook_thread.is_alive(): 234s > raise RuntimeError("The notebook server failed to start") from e 234s E RuntimeError: The notebook server failed to start 234s 234s notebook/tests/launchnotebook.py:59: RuntimeError 234s ____________ ERROR at setup of RedirectTestCase.test_trailing_slash ____________ 234s 234s self = 234s 234s def _new_conn(self) -> socket.socket: 234s """Establish a socket connection and set nodelay settings on it. 234s 234s :return: New socket connection. 
234s """ 234s try: 234s > sock = connection.create_connection( 234s (self._dns_host, self.port), 234s self.timeout, 234s source_address=self.source_address, 234s socket_options=self.socket_options, 234s ) 234s 234s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 234s raise err 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s address = ('localhost', 12341), timeout = None, source_address = None 234s socket_options = [(6, 1, 1)] 234s 234s def create_connection( 234s address: tuple[str, int], 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s source_address: tuple[str, int] | None = None, 234s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 234s ) -> socket.socket: 234s """Connect to *address* and return the socket object. 234s 234s Convenience function. Connect to *address* (a 2-tuple ``(host, 234s port)``) and return the socket object. Passing the optional 234s *timeout* parameter will set the timeout on the socket instance 234s before attempting to connect. If no *timeout* is supplied, the 234s global default timeout setting returned by :func:`socket.getdefaulttimeout` 234s is used. If *source_address* is set it must be a tuple of (host, port) 234s for the socket to bind as a source address before making the connection. 234s An host of '' or port 0 tells the OS to use the default. 234s """ 234s 234s host, port = address 234s if host.startswith("["): 234s host = host.strip("[]") 234s err = None 234s 234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 234s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 234s # The original create_connection function always returns all records. 
234s family = allowed_gai_family() 234s 234s try: 234s host.encode("idna") 234s except UnicodeError: 234s raise LocationParseError(f"'{host}', label empty or too long") from None 234s 234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 234s af, socktype, proto, canonname, sa = res 234s sock = None 234s try: 234s sock = socket.socket(af, socktype, proto) 234s 234s # If provided, set socket level options before connecting. 234s _set_socket_options(sock, socket_options) 234s 234s if timeout is not _DEFAULT_TIMEOUT: 234s sock.settimeout(timeout) 234s if source_address: 234s sock.bind(source_address) 234s > sock.connect(sa) 234s E ConnectionRefusedError: [Errno 111] Connection refused 234s 234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 234s 234s The above exception was the direct cause of the following exception: 234s 234s self = 234s method = 'GET', url = '/a%40b/api/contents', body = None 234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 234s redirect = False, assert_same_host = False 234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 234s release_conn = False, chunked = False, body_pos = None, preload_content = False 234s decode_content = False, response_kw = {} 234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 234s destination_scheme = None, conn = None, release_this_conn = True 234s http_tunnel_required = False, err = None, clean_exit = False 234s 234s def urlopen( # type: ignore[override] 234s self, 234s method: str, 234s url: str, 234s body: _TYPE_BODY | None = None, 234s headers: typing.Mapping[str, str] | None = None, 234s retries: Retry | bool | int | None = None, 234s redirect: bool = True, 234s assert_same_host: bool = 
True, 234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 234s pool_timeout: int | None = None, 234s release_conn: bool | None = None, 234s chunked: bool = False, 234s body_pos: _TYPE_BODY_POSITION | None = None, 234s preload_content: bool = True, 234s decode_content: bool = True, 234s **response_kw: typing.Any, 234s ) -> BaseHTTPResponse: 234s """ 234s Get a connection from the pool and perform an HTTP request. This is the 234s lowest level call for making a request, so you'll need to specify all 234s the raw details. 234s 234s .. note:: 234s 234s More commonly, it's appropriate to use a convenience method 234s such as :meth:`request`. 234s 234s .. note:: 234s 234s `release_conn` will only behave as expected if 234s `preload_content=False` because we want to make 234s `preload_content=False` the default behaviour someday soon without 234s breaking backwards compatibility. 234s 234s :param method: 234s HTTP request method (such as GET, POST, PUT, etc.) 234s 234s :param url: 234s The URL to perform the request on. 234s 234s :param body: 234s Data to send in the request body, either :class:`str`, :class:`bytes`, 234s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 234s 234s :param headers: 234s Dictionary of custom headers to send, such as User-Agent, 234s If-None-Match, etc. If None, pool headers are used. If provided, 234s these headers completely replace any pool-specific headers. 234s 234s :param retries: 234s Configure the number of retries to allow before raising a 234s :class:`~urllib3.exceptions.MaxRetryError` exception. 234s 234s Pass ``None`` to retry until you receive a response. Pass a 234s :class:`~urllib3.util.retry.Retry` object for fine-grained control 234s over different types of retries. 234s Pass an integer number to retry connection errors that many times, 234s but no other types of errors. Pass zero to never retry. 234s 234s If ``False``, then retries are disabled and any exception is raised 234s immediately. 
Also, instead of raising a MaxRetryError on redirects, 234s the redirect response will be returned. 234s 234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 234s 234s :param redirect: 234s If True, automatically handle redirects (status codes 301, 302, 234s 303, 307, 308). Each redirect counts as a retry. Disabling retries 234s will disable redirect, too. 234s 234s :param assert_same_host: 234s If ``True``, will make sure that the host of the pool requests is 234s consistent else will raise HostChangedError. When ``False``, you can 234s use the pool on an HTTP proxy and request foreign hosts. 234s 234s :param timeout: 234s If specified, overrides the default timeout for this one 234s request. It may be a float (in seconds) or an instance of 234s :class:`urllib3.util.Timeout`. 234s 234s :param pool_timeout: 234s If set and the pool is set to block=True, then this method will 234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 234s connection is available within the time period. 234s 234s :param bool preload_content: 234s If True, the response's body will be preloaded into memory. 234s 234s :param bool decode_content: 234s If True, will attempt to decode the body based on the 234s 'content-encoding' header. 234s 234s :param release_conn: 234s If False, then the urlopen call will not release the connection 234s back into the pool once a response is received (but will release if 234s you read the entire contents of the response such as when 234s `preload_content=True`). This is useful if you're not preloading 234s the response's content immediately. You will need to call 234s ``r.release_conn()`` on the response ``r`` to return the connection 234s back into the pool. If None, it takes the value of ``preload_content`` 234s which defaults to ``True``. 234s 234s :param bool chunked: 234s If True, urllib3 will send the body using chunked transfer 234s encoding. 
Otherwise, urllib3 will send the body using the standard 234s content-length form. Defaults to False. 234s 234s :param int body_pos: 234s Position to seek to in file-like body in the event of a retry or 234s redirect. Typically this won't need to be set because urllib3 will 234s auto-populate the value when needed. 234s """ 234s parsed_url = parse_url(url) 234s destination_scheme = parsed_url.scheme 234s 234s if headers is None: 234s headers = self.headers 234s 234s if not isinstance(retries, Retry): 234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 234s 234s if release_conn is None: 234s release_conn = preload_content 234s 234s # Check host 234s if assert_same_host and not self.is_same_host(url): 234s raise HostChangedError(self, url, retries) 234s 234s # Ensure that the URL we're connecting to is properly encoded 234s if url.startswith("/"): 234s url = to_str(_encode_target(url)) 234s else: 234s url = to_str(parsed_url.url) 234s 234s conn = None 234s 234s # Track whether `conn` needs to be released before 234s # returning/raising/recursing. Update this variable if necessary, and 234s # leave `release_conn` constant throughout the function. That way, if 234s # the function recurses, the original value of `release_conn` will be 234s # passed down into the recursive call, and its value will be respected. 234s # 234s # See issue #651 [1] for details. 234s # 234s # [1] 234s release_this_conn = release_conn 234s 234s http_tunnel_required = connection_requires_http_tunnel( 234s self.proxy, self.proxy_config, destination_scheme 234s ) 234s 234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 234s # have to copy the headers dict so we can safely change it without those 234s # changes being reflected in anyone else's copy. 
234s if not http_tunnel_required: 234s headers = headers.copy() # type: ignore[attr-defined] 234s headers.update(self.proxy_headers) # type: ignore[union-attr] 234s 234s # Must keep the exception bound to a separate variable or else Python 3 234s # complains about UnboundLocalError. 234s err = None 234s 234s # Keep track of whether we cleanly exited the except block. This 234s # ensures we do proper cleanup in finally. 234s clean_exit = False 234s 234s # Rewind body position, if needed. Record current position 234s # for future rewinds in the event of a redirect/retry. 234s body_pos = set_file_position(body, body_pos) 234s 234s try: 234s # Request a connection from the queue. 234s timeout_obj = self._get_timeout(timeout) 234s conn = self._get_conn(timeout=pool_timeout) 234s 234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 234s 234s # Is this a closed/new connection that requires CONNECT tunnelling? 234s if self.proxy is not None and http_tunnel_required and conn.is_closed: 234s try: 234s self._prepare_proxy(conn) 234s except (BaseSSLError, OSError, SocketTimeout) as e: 234s self._raise_timeout( 234s err=e, url=self.proxy.url, timeout_value=conn.timeout 234s ) 234s raise 234s 234s # If we're going to release the connection in ``finally:``, then 234s # the response doesn't need to know about the connection. Otherwise 234s # it will also try to release it and we'll have a double-release 234s # mess. 
234s response_conn = conn if not release_conn else None
234s
234s # Make the request on the HTTPConnection object
234s > response = self._make_request(
234s conn,
234s method,
234s url,
234s timeout=timeout_obj,
234s body=body,
234s headers=headers,
234s chunked=chunked,
234s retries=retries,
234s response_conn=response_conn,
234s preload_content=preload_content,
234s decode_content=decode_content,
234s **response_kw,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s except socket.gaierror as e:
234s raise NameResolutionError(self.host, self, e) from e
234s except SocketTimeout as e:
234s raise ConnectTimeoutError(
234s self,
234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s ) from e
234s
234s except OSError as e:
234s > raise NewConnectionError(
234s self, f"Failed to establish a new connection: {e}"
234s ) from e
234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content.
234s :param timeout: (optional) How long to wait for the server to send
234s data before giving up, as a float, or a :ref:`(connect timeout,
234s read timeout) ` tuple.
234s :type timeout: float or tuple or urllib3 Timeout object
234s :param verify: (optional) Either a boolean, in which case it controls whether
234s we verify the server's TLS certificate, or a string, in which case it
234s must be a path to a CA bundle to use
234s :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response
234s """
234s
234s try:
234s conn = self.get_connection(request.url, proxies)
234s except LocationValueError as e:
234s raise InvalidURL(e, request=request)
234s
234s self.cert_verify(conn, request.url, verify, cert)
234s url = self.request_url(request, proxies)
234s self.add_headers(
234s request,
234s stream=stream,
234s timeout=timeout,
234s verify=verify,
234s cert=cert,
234s proxies=proxies,
234s )
234s
234s chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s if isinstance(timeout, tuple):
234s try:
234s connect, read = timeout
234s timeout = TimeoutSauce(connect=connect, read=read)
234s except ValueError:
234s raise ValueError(
234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s f"or a single float to set both timeouts to the same value."
234s )
234s elif isinstance(timeout, TimeoutSauce):
234s pass
234s else:
234s timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s try:
234s > resp = conn.urlopen(
234s method=request.method,
234s url=url,
234s body=request.body,
234s headers=request.headers,
234s redirect=False,
234s assert_same_host=False,
234s preload_content=False,
234s decode_content=False,
234s retries=self.max_retries,
234s timeout=timeout,
234s chunked=chunked,
234s )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool =
234s _stacktrace =
234s
234s def increment(
234s self,
234s method: str | None = None,
234s url: str | None = None,
234s response: BaseHTTPResponse | None = None,
234s error: Exception | None = None,
234s _pool: ConnectionPool | None = None,
234s _stacktrace: TracebackType | None = None,
234s ) -> Retry:
234s """Return a new Retry object with incremented retry counters.
234s
234s :param response: A response object, or None, if the server did not
234s return a response.
234s :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s :param Exception error: An error encountered during the request, or
234s None if the response was received successfully.
234s
234s :return: A new ``Retry`` object.
234s """
234s if self.total is False and error:
234s # Disabled, indicate to re-raise the error.
234s raise reraise(type(error), error, _stacktrace)
234s
234s total = self.total
234s if total is not None:
234s total -= 1
234s
234s connect = self.connect
234s read = self.read
234s redirect = self.redirect
234s status_count = self.status
234s other = self.other
234s cause = "unknown"
234s status = None
234s redirect_location = None
234s
234s if error and self._is_connection_error(error):
234s # Connect retry?
234s if connect is False:
234s raise reraise(type(error), error, _stacktrace)
234s elif connect is not None:
234s connect -= 1
234s
234s elif error and self._is_read_error(error):
234s # Read retry?
234s if read is False or method is None or not self._is_method_retryable(method):
234s raise reraise(type(error), error, _stacktrace)
234s elif read is not None:
234s read -= 1
234s
234s elif error:
234s # Other retry?
234s if other is not None:
234s other -= 1
234s
234s elif response and response.get_redirect_location():
234s # Redirect retry?
234s if redirect is not None:
234s redirect -= 1
234s cause = "too many redirects"
234s response_redirect_location = response.get_redirect_location()
234s if response_redirect_location:
234s redirect_location = response_redirect_location
234s status = response.status
234s
234s else:
234s # Incrementing because of a server error like a 500 in
234s # status_forcelist and the given method is in the allowed_methods
234s cause = ResponseError.GENERIC_ERROR
234s if response and response.status:
234s if status_count is not None:
234s status_count -= 1
234s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s status = response.status
234s
234s history = self.history + (
234s RequestHistory(method, url, error, status, redirect_location),
234s )
234s
234s new_retry = self.new(
234s total=total,
234s connect=connect,
234s read=read,
234s redirect=redirect,
234s status=status_count,
234s other=other,
234s history=history,
234s )
234s
234s if new_retry.is_exhausted():
234s reason = error or ResponseError(cause)
234s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
234s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s
234s During handling of the above exception, another exception occurred:
234s
234s cls =
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s > cls.fetch_url(url)
234s
234s notebook/tests/launchnotebook.py:53:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content.
234s :param timeout: (optional) How long to wait for the server to send
234s data before giving up, as a float, or a :ref:`(connect timeout,
234s read timeout) ` tuple.
234s :type timeout: float or tuple or urllib3 Timeout object
234s :param verify: (optional) Either a boolean, in which case it controls whether
234s we verify the server's TLS certificate, or a string, in which case it
234s must be a path to a CA bundle to use
234s :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response
234s """
234s
234s try:
234s conn = self.get_connection(request.url, proxies)
234s except LocationValueError as e:
234s raise InvalidURL(e, request=request)
234s
234s self.cert_verify(conn, request.url, verify, cert)
234s url = self.request_url(request, proxies)
234s self.add_headers(
234s request,
234s stream=stream,
234s timeout=timeout,
234s verify=verify,
234s cert=cert,
234s proxies=proxies,
234s )
234s
234s chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s if isinstance(timeout, tuple):
234s try:
234s connect, read = timeout
234s timeout = TimeoutSauce(connect=connect, read=read)
234s except ValueError:
234s raise ValueError(
234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s f"or a single float to set both timeouts to the same value."
234s )
234s elif isinstance(timeout, TimeoutSauce):
234s pass
234s else:
234s timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s try:
234s resp = conn.urlopen(
234s method=request.method,
234s url=url,
234s body=request.body,
234s headers=request.headers,
234s redirect=False,
234s assert_same_host=False,
234s preload_content=False,
234s decode_content=False,
234s retries=self.max_retries,
234s timeout=timeout,
234s chunked=chunked,
234s )
234s
234s except (ProtocolError, OSError) as err:
234s raise ConnectionError(err, request=request)
234s
234s except MaxRetryError as e:
234s if isinstance(e.reason, ConnectTimeoutError):
234s # TODO: Remove this in 3.0.0: see #2811
234s if not isinstance(e.reason, NewConnectionError):
234s raise ConnectTimeout(e, request=request)
234s
234s if isinstance(e.reason, ResponseError):
234s raise RetryError(e, request=request)
234s
234s if isinstance(e.reason, _ProxyError):
234s raise ProxyError(e, request=request)
234s
234s if isinstance(e.reason, _SSLError):
234s # This branch is for urllib3 v1.22 and later.
234s raise SSLError(e, request=request)
234s
234s > raise ConnectionError(e, request=request)
234s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s cls =
234s
234s @classmethod
234s def setup_class(cls):
234s cls.tmp_dir = TemporaryDirectory()
234s def tmp(*parts):
234s path = os.path.join(cls.tmp_dir.name, *parts)
234s try:
234s os.makedirs(path)
234s except OSError as e:
234s if e.errno != errno.EEXIST:
234s raise
234s return path
234s
234s cls.home_dir = tmp('home')
234s data_dir = cls.data_dir = tmp('data')
234s config_dir = cls.config_dir = tmp('config')
234s runtime_dir = cls.runtime_dir = tmp('runtime')
234s cls.notebook_dir = tmp('notebooks')
234s cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s cls.env_patch.start()
234s # Patch systemwide & user-wide data & config directories, to isolate
234s # the tests from oddities of the local setup. But leave Python env
234s # locations alone, so data files for e.g. nbconvert are accessible.
234s # If this isolation isn't sufficient, you may need to run the tests in
234s # a virtualenv or conda env.
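For context on the `[Errno 111]` failures that fill this log: the refusal happens at the raw socket layer, before any HTTP is spoken. A minimal stdlib sketch reproducing the same error class (a free local port is grabbed and released first so nothing is listening on it; errno 111 / `ECONNREFUSED` is the Linux value seen in the traceback):

```python
import errno
import socket

# Reserve a free local port, then close it so nothing is listening there.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
unused_port = probe.getsockname()[1]
probe.close()

refused = False
try:
    # The same call urllib3's create_connection() ultimately makes.
    with socket.create_connection(("127.0.0.1", unused_port), timeout=5):
        pass
except ConnectionRefusedError as exc:
    # On Linux this carries errno 111, matching the log above.
    refused = exc.errno == errno.ECONNREFUSED
```

There is a small race (another process could claim the port between close and connect), so this is a demonstration, not a robust test fixture.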
234s cls.path_patch = patch.multiple(
234s jupyter_core.paths,
234s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s )
234s cls.path_patch.start()
234s
234s config = cls.config or Config()
234s config.NotebookNotary.db_file = ':memory:'
234s
234s cls.token = hexlify(os.urandom(4)).decode('ascii')
234s
234s started = Event()
234s def start_thread():
234s try:
234s bind_args = cls.get_bind_args()
234s app = cls.notebook = NotebookApp(
234s port_retries=0,
234s open_browser=False,
234s config_dir=cls.config_dir,
234s data_dir=cls.data_dir,
234s runtime_dir=cls.runtime_dir,
234s notebook_dir=cls.notebook_dir,
234s base_url=cls.url_prefix,
234s config=config,
234s allow_root=True,
234s token=cls.token,
234s **bind_args
234s )
234s if "asyncio" in sys.modules:
234s app._init_asyncio_patch()
234s import asyncio
234s
234s asyncio.set_event_loop(asyncio.new_event_loop())
234s # Patch the current loop in order to match production
234s # behavior
234s import nest_asyncio
234s
234s nest_asyncio.apply()
234s # don't register signal handler during tests
234s app.init_signal = lambda : None
234s # clear log handlers and propagate to root for nose to capture it
234s # needs to be redone after initialize, which reconfigures logging
234s app.log.propagate = True
234s app.log.handlers = []
234s app.initialize(argv=cls.get_argv())
234s app.log.propagate = True
234s app.log.handlers = []
234s loop = IOLoop.current()
234s loop.add_callback(started.set)
234s app.start()
234s finally:
234s # set the event, so failure to start doesn't cause a hang
234s started.set()
234s app.session_manager.close()
234s cls.notebook_thread = Thread(target=start_thread)
234s cls.notebook_thread.daemon = True
234s cls.notebook_thread.start()
234s started.wait()
234s > cls.wait_until_alive()
234s
234s notebook/tests/launchnotebook.py:198:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s cls =
234s
234s @classmethod
234s def wait_until_alive(cls):
234s """Wait for the server to be alive"""
234s url = cls.base_url() + 'api/contents'
234s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s try:
234s cls.fetch_url(url)
234s except ModuleNotFoundError as error:
234s # Errors that should be immediately thrown back to caller
234s raise error
234s except Exception as e:
234s if not cls.notebook_thread.is_alive():
234s > raise RuntimeError("The notebook server failed to start") from e
234s E RuntimeError: The notebook server failed to start
234s
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s ___________________ ERROR at setup of TreeTest.test_redirect ___________________
234s
234s self =
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s > sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
234s raise err
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s address = ('localhost', 12341), timeout = None, source_address = None
234s socket_options = [(6, 1, 1)]
234s
234s def create_connection(
234s address: tuple[str, int],
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s source_address: tuple[str, int] | None = None,
234s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
234s ) -> socket.socket:
234s """Connect to *address* and return the socket object.
234s
234s Convenience function. Connect to *address* (a 2-tuple ``(host,
234s port)``) and return the socket object. Passing the optional
234s *timeout* parameter will set the timeout on the socket instance
234s before attempting to connect. If no *timeout* is supplied, the
234s global default timeout setting returned by :func:`socket.getdefaulttimeout`
234s is used. If *source_address* is set it must be a tuple of (host, port)
234s for the socket to bind as a source address before making the connection.
234s An host of '' or port 0 tells the OS to use the default.
234s """
234s
234s host, port = address
234s if host.startswith("["):
234s host = host.strip("[]")
234s err = None
234s
234s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
234s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
234s # The original create_connection function always returns all records.
234s family = allowed_gai_family()
234s
234s try:
234s host.encode("idna")
234s except UnicodeError:
234s raise LocationParseError(f"'{host}', label empty or too long") from None
234s
234s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
234s af, socktype, proto, canonname, sa = res
234s sock = None
234s try:
234s sock = socket.socket(af, socktype, proto)
234s
234s # If provided, set socket level options before connecting.
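The harness method `wait_until_alive` shown in this traceback polls the contents API until the server answers or the notebook thread dies. A generic, hedged sketch of that polling pattern (the constants mirror the harness's `MAX_WAITTIME`/`POLL_INTERVAL` names but the values are ours, and `probe` stands in for `fetch_url`; this is not the notebook code itself):

```python
import time

MAX_WAITTIME = 30    # seconds; the real harness defines its own values
POLL_INTERVAL = 0.1  # seconds between probes

def wait_until_alive(probe, max_waittime=MAX_WAITTIME, poll_interval=POLL_INTERVAL):
    """Call `probe` until it stops raising; give up after max_waittime."""
    last_error = None
    for _ in range(int(max_waittime / poll_interval)):
        try:
            probe()          # e.g. an HTTP GET against the server
            return True
        except Exception as exc:  # ConnectionError while the server boots
            last_error = exc
            time.sleep(poll_interval)
    raise RuntimeError("The server failed to start") from last_error
```

Chaining the final `RuntimeError` `from last_error` is what produces the "During handling of the above exception..." structure seen throughout this log.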
234s _set_socket_options(sock, socket_options)
234s
234s if timeout is not _DEFAULT_TIMEOUT:
234s sock.settimeout(timeout)
234s if source_address:
234s sock.bind(source_address)
234s > sock.connect(sa)
234s E ConnectionRefusedError: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s method = 'GET', url = '/a%40b/api/contents', body = None
234s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
234s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s redirect = False, assert_same_host = False
234s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
234s release_conn = False, chunked = False, body_pos = None, preload_content = False
234s decode_content = False, response_kw = {}
234s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None)
234s destination_scheme = None, conn = None, release_this_conn = True
234s http_tunnel_required = False, err = None, clean_exit = False
234s
234s def urlopen( # type: ignore[override]
234s self,
234s method: str,
234s url: str,
234s body: _TYPE_BODY | None = None,
234s headers: typing.Mapping[str, str] | None = None,
234s retries: Retry | bool | int | None = None,
234s redirect: bool = True,
234s assert_same_host: bool = True,
234s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
234s pool_timeout: int | None = None,
234s release_conn: bool | None = None,
234s chunked: bool = False,
234s body_pos: _TYPE_BODY_POSITION | None = None,
234s preload_content: bool = True,
234s decode_content: bool = True,
234s **response_kw: typing.Any,
234s ) -> BaseHTTPResponse:
234s """
234s Get a connection from the pool and perform an HTTP request. This is the
234s lowest level call for making a request, so you'll need to specify all
234s the raw details.
234s
234s .. note::
234s
234s More commonly, it's appropriate to use a convenience method
234s such as :meth:`request`.
234s
234s .. note::
234s
234s `release_conn` will only behave as expected if
234s `preload_content=False` because we want to make
234s `preload_content=False` the default behaviour someday soon without
234s breaking backwards compatibility.
234s
234s :param method:
234s HTTP request method (such as GET, POST, PUT, etc.)
234s
234s :param url:
234s The URL to perform the request on.
234s
234s :param body:
234s Data to send in the request body, either :class:`str`, :class:`bytes`,
234s an iterable of :class:`str`/:class:`bytes`, or a file-like object.
234s
234s :param headers:
234s Dictionary of custom headers to send, such as User-Agent,
234s If-None-Match, etc. If None, pool headers are used. If provided,
234s these headers completely replace any pool-specific headers.
234s
234s :param retries:
234s Configure the number of retries to allow before raising a
234s :class:`~urllib3.exceptions.MaxRetryError` exception.
234s
234s Pass ``None`` to retry until you receive a response. Pass a
234s :class:`~urllib3.util.retry.Retry` object for fine-grained control
234s over different types of retries.
234s Pass an integer number to retry connection errors that many times,
234s but no other types of errors. Pass zero to never retry.
234s
234s If ``False``, then retries are disabled and any exception is raised
234s immediately. Also, instead of raising a MaxRetryError on redirects,
234s the redirect response will be returned.
234s
234s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
234s
234s :param redirect:
234s If True, automatically handle redirects (status codes 301, 302,
234s 303, 307, 308). Each redirect counts as a retry. Disabling retries
234s will disable redirect, too.
234s
234s :param assert_same_host:
234s If ``True``, will make sure that the host of the pool requests is
234s consistent else will raise HostChangedError. When ``False``, you can
234s use the pool on an HTTP proxy and request foreign hosts.
234s
234s :param timeout:
234s If specified, overrides the default timeout for this one
234s request. It may be a float (in seconds) or an instance of
234s :class:`urllib3.util.Timeout`.
234s
234s :param pool_timeout:
234s If set and the pool is set to block=True, then this method will
234s block for ``pool_timeout`` seconds and raise EmptyPoolError if no
234s connection is available within the time period.
234s
234s :param bool preload_content:
234s If True, the response's body will be preloaded into memory.
234s
234s :param bool decode_content:
234s If True, will attempt to decode the body based on the
234s 'content-encoding' header.
234s
234s :param release_conn:
234s If False, then the urlopen call will not release the connection
234s back into the pool once a response is received (but will release if
234s you read the entire contents of the response such as when
234s `preload_content=True`). This is useful if you're not preloading
234s the response's content immediately. You will need to call
234s ``r.release_conn()`` on the response ``r`` to return the connection
234s back into the pool. If None, it takes the value of ``preload_content``
234s which defaults to ``True``.
234s
234s :param bool chunked:
234s If True, urllib3 will send the body using chunked transfer
234s encoding. Otherwise, urllib3 will send the body using the standard
234s content-length form. Defaults to False.
234s
234s :param int body_pos:
234s Position to seek to in file-like body in the event of a retry or
234s redirect. Typically this won't need to be set because urllib3 will
234s auto-populate the value when needed.
234s """
234s parsed_url = parse_url(url)
234s destination_scheme = parsed_url.scheme
234s
234s if headers is None:
234s headers = self.headers
234s
234s if not isinstance(retries, Retry):
234s retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
234s
234s if release_conn is None:
234s release_conn = preload_content
234s
234s # Check host
234s if assert_same_host and not self.is_same_host(url):
234s raise HostChangedError(self, url, retries)
234s
234s # Ensure that the URL we're connecting to is properly encoded
234s if url.startswith("/"):
234s url = to_str(_encode_target(url))
234s else:
234s url = to_str(parsed_url.url)
234s
234s conn = None
234s
234s # Track whether `conn` needs to be released before
234s # returning/raising/recursing. Update this variable if necessary, and
234s # leave `release_conn` constant throughout the function. That way, if
234s # the function recurses, the original value of `release_conn` will be
234s # passed down into the recursive call, and its value will be respected.
234s #
234s # See issue #651 [1] for details.
234s #
234s # [1]
234s release_this_conn = release_conn
234s
234s http_tunnel_required = connection_requires_http_tunnel(
234s self.proxy, self.proxy_config, destination_scheme
234s )
234s
234s # Merge the proxy headers. Only done when not using HTTP CONNECT. We
234s # have to copy the headers dict so we can safely change it without those
234s # changes being reflected in anyone else's copy.
234s if not http_tunnel_required:
234s headers = headers.copy() # type: ignore[attr-defined]
234s headers.update(self.proxy_headers) # type: ignore[union-attr]
234s
234s # Must keep the exception bound to a separate variable or else Python 3
234s # complains about UnboundLocalError.
234s err = None
234s
234s # Keep track of whether we cleanly exited the except block. This
234s # ensures we do proper cleanup in finally.
234s clean_exit = False
234s
234s # Rewind body position, if needed. Record current position
234s # for future rewinds in the event of a redirect/retry.
234s body_pos = set_file_position(body, body_pos)
234s
234s try:
234s # Request a connection from the queue.
234s timeout_obj = self._get_timeout(timeout)
234s conn = self._get_conn(timeout=pool_timeout)
234s
234s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
234s
234s # Is this a closed/new connection that requires CONNECT tunnelling?
234s if self.proxy is not None and http_tunnel_required and conn.is_closed:
234s try:
234s self._prepare_proxy(conn)
234s except (BaseSSLError, OSError, SocketTimeout) as e:
234s self._raise_timeout(
234s err=e, url=self.proxy.url, timeout_value=conn.timeout
234s )
234s raise
234s
234s # If we're going to release the connection in ``finally:``, then
234s # the response doesn't need to know about the connection. Otherwise
234s # it will also try to release it and we'll have a double-release
234s # mess.
234s response_conn = conn if not release_conn else None
234s
234s # Make the request on the HTTPConnection object
234s > response = self._make_request(
234s conn,
234s method,
234s url,
234s timeout=timeout_obj,
234s body=body,
234s headers=headers,
234s chunked=chunked,
234s retries=retries,
234s response_conn=response_conn,
234s preload_content=preload_content,
234s decode_content=decode_content,
234s **response_kw,
234s )
234s
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request
234s conn.request(
234s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request
234s self.endheaders()
234s /usr/lib/python3.12/http/client.py:1331: in endheaders
234s self._send_output(message_body, encode_chunked=encode_chunked)
234s /usr/lib/python3.12/http/client.py:1091: in _send_output
234s self.send(msg)
234s /usr/lib/python3.12/http/client.py:1035: in send
234s self.connect()
234s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect
234s self.sock = self._new_conn()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self =
234s
234s def _new_conn(self) -> socket.socket:
234s """Establish a socket connection and set nodelay settings on it.
234s
234s :return: New socket connection.
234s """
234s try:
234s sock = connection.create_connection(
234s (self._dns_host, self.port),
234s self.timeout,
234s source_address=self.source_address,
234s socket_options=self.socket_options,
234s )
234s except socket.gaierror as e:
234s raise NameResolutionError(self.host, self, e) from e
234s except SocketTimeout as e:
234s raise ConnectTimeoutError(
234s self,
234s f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
234s ) from e
234s
234s except OSError as e:
234s > raise NewConnectionError(
234s self, f"Failed to establish a new connection: {e}"
234s ) from e
234s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused
234s
234s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError
234s
234s The above exception was the direct cause of the following exception:
234s
234s self =
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s
234s def send(
234s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s ):
234s """Sends PreparedRequest object. Returns Response object.
234s
234s :param request: The :class:`PreparedRequest ` being sent.
234s :param stream: (optional) Whether to stream the request content.
234s :param timeout: (optional) How long to wait for the server to send
234s data before giving up, as a float, or a :ref:`(connect timeout,
234s read timeout) ` tuple.
234s :type timeout: float or tuple or urllib3 Timeout object
234s :param verify: (optional) Either a boolean, in which case it controls whether
234s we verify the server's TLS certificate, or a string, in which case it
234s must be a path to a CA bundle to use
234s :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s :param proxies: (optional) The proxies dictionary to apply to the request.
234s :rtype: requests.Response
234s """
234s
234s try:
234s conn = self.get_connection(request.url, proxies)
234s except LocationValueError as e:
234s raise InvalidURL(e, request=request)
234s
234s self.cert_verify(conn, request.url, verify, cert)
234s url = self.request_url(request, proxies)
234s self.add_headers(
234s request,
234s stream=stream,
234s timeout=timeout,
234s verify=verify,
234s cert=cert,
234s proxies=proxies,
234s )
234s
234s chunked = not (request.body is None or "Content-Length" in request.headers)
234s
234s if isinstance(timeout, tuple):
234s try:
234s connect, read = timeout
234s timeout = TimeoutSauce(connect=connect, read=read)
234s except ValueError:
234s raise ValueError(
234s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s f"or a single float to set both timeouts to the same value."
234s )
234s elif isinstance(timeout, TimeoutSauce):
234s pass
234s else:
234s timeout = TimeoutSauce(connect=timeout, read=timeout)
234s
234s try:
234s > resp = conn.urlopen(
234s method=request.method,
234s url=url,
234s body=request.body,
234s headers=request.headers,
234s redirect=False,
234s assert_same_host=False,
234s preload_content=False,
234s decode_content=False,
234s retries=self.max_retries,
234s timeout=timeout,
234s chunked=chunked,
234s )
234s
234s /usr/lib/python3/dist-packages/requests/adapters.py:486:
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen
234s retries = retries.increment(
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
234s
234s self = Retry(total=0, connect=None, read=False, redirect=None, status=None)
234s method = 'GET', url = '/a%40b/api/contents', response = None
234s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')
234s _pool =
234s _stacktrace =
234s
234s def increment(
234s self,
234s method: str | None = None,
234s url: str | None = None,
234s response: BaseHTTPResponse | None = None,
234s error: Exception | None = None,
234s _pool: ConnectionPool | None = None,
234s _stacktrace: TracebackType | None = None,
234s ) -> Retry:
234s """Return a new Retry object with incremented retry counters.
234s
234s :param response: A response object, or None, if the server did not
234s return a response.
234s :type response: :class:`~urllib3.response.BaseHTTPResponse`
234s :param Exception error: An error encountered during the request, or
234s None if the response was received successfully.
234s
234s :return: A new ``Retry`` object.
234s """
234s if self.total is False and error:
234s # Disabled, indicate to re-raise the error.
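The `adapter.send()` branch shown above normalizes the user-facing `timeout` into separate connect/read values (via `TimeoutSauce` in requests). A simplified stand-alone version of just that normalization, for reference (the function name `normalize_timeout` is ours, not a requests API):

```python
def normalize_timeout(timeout):
    """Normalize a requests-style timeout into a (connect, read) pair."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            # Mirrors the error requests raises for a malformed tuple.
            raise ValueError(
                f"Invalid timeout {timeout}. Pass a (connect, read) timeout "
                f"tuple, or a single float to set both timeouts to the same value."
            )
        return connect, read
    # A single number (or None, meaning "wait forever") applies to both phases.
    return timeout, timeout
```

Note that the failing tests in this log run with `timeout=None`, i.e. `Timeout(connect=None, read=None)`: the connect attempt fails immediately with ECONNREFUSED rather than timing out.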
234s             raise reraise(type(error), error, _stacktrace)
234s 
234s         total = self.total
234s         if total is not None:
234s             total -= 1
234s 
234s         connect = self.connect
234s         read = self.read
234s         redirect = self.redirect
234s         status_count = self.status
234s         other = self.other
234s         cause = "unknown"
234s         status = None
234s         redirect_location = None
234s 
234s         if error and self._is_connection_error(error):
234s             # Connect retry?
234s             if connect is False:
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif connect is not None:
234s                 connect -= 1
234s 
234s         elif error and self._is_read_error(error):
234s             # Read retry?
234s             if read is False or method is None or not self._is_method_retryable(method):
234s                 raise reraise(type(error), error, _stacktrace)
234s             elif read is not None:
234s                 read -= 1
234s 
234s         elif error:
234s             # Other retry?
234s             if other is not None:
234s                 other -= 1
234s 
234s         elif response and response.get_redirect_location():
234s             # Redirect retry?
234s             if redirect is not None:
234s                 redirect -= 1
234s             cause = "too many redirects"
234s             response_redirect_location = response.get_redirect_location()
234s             if response_redirect_location:
234s                 redirect_location = response_redirect_location
234s             status = response.status
234s 
234s         else:
234s             # Incrementing because of a server error like a 500 in
234s             # status_forcelist and the given method is in the allowed_methods
234s             cause = ResponseError.GENERIC_ERROR
234s             if response and response.status:
234s                 if status_count is not None:
234s                     status_count -= 1
234s                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
234s                 status = response.status
234s 
234s         history = self.history + (
234s             RequestHistory(method, url, error, status, redirect_location),
234s         )
234s 
234s         new_retry = self.new(
234s             total=total,
234s             connect=connect,
234s             read=read,
234s             redirect=redirect,
234s             status=status_count,
234s             other=other,
234s             history=history,
234s         )
234s 
234s         if new_retry.is_exhausted():
234s             reason = error or ResponseError(cause)
234s >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
234s E           urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s >               cls.fetch_url(url)
234s 
234s notebook/tests/launchnotebook.py:53: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s notebook/tests/launchnotebook.py:82: in fetch_url
234s     return requests.get(url)
234s /usr/lib/python3/dist-packages/requests/api.py:73: in get
234s     return request("get", url, params=params, **kwargs)
234s /usr/lib/python3/dist-packages/requests/api.py:59: in request
234s     return session.request(method=method, url=url, **kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request
234s     resp = self.send(prep, **send_kwargs)
234s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send
234s     r = adapter.send(request, **kwargs)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s request = , stream = False
234s timeout = Timeout(connect=None, read=None, total=None), verify = True
234s cert = None, proxies = OrderedDict()
234s 
234s     def send(
234s         self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
234s     ):
234s         """Sends PreparedRequest object. Returns Response object.
234s 
234s         :param request: The :class:`PreparedRequest ` being sent.
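The locals shown above (`self = Retry(total=0, ...)`) explain the immediate failure: the adapter is configured with zero retries, so the first `NewConnectionError` exhausts the budget and `increment` raises `MaxRetryError`. A stdlib-only sketch of that bookkeeping (a deliberate simplification of urllib3's `Retry`, not its real API):

```python
# Stdlib-only sketch of urllib3's retry bookkeeping for connection errors.
# This MaxRetryError stands in for urllib3.exceptions.MaxRetryError.

class MaxRetryError(Exception):
    pass

class MiniRetry:
    def __init__(self, total=0):
        self.total = total

    def is_exhausted(self):
        # total=0 allows the first attempt only; any failure drives it below 0.
        return self.total < 0

    def increment(self, error):
        """Return a new MiniRetry, or raise once the retry budget is spent."""
        new_retry = MiniRetry(total=self.total - 1)   # spend one attempt
        if new_retry.is_exhausted():
            raise MaxRetryError(error)                # give up, carrying the cause
        return new_retry
```

With `total=0`, the very first `ConnectionRefusedError` exhausts the budget, which is the single-shot behaviour seen in the log.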
234s         :param stream: (optional) Whether to stream the request content.
234s         :param timeout: (optional) How long to wait for the server to send
234s             data before giving up, as a float, or a :ref:`(connect timeout,
234s             read timeout) ` tuple.
234s         :type timeout: float or tuple or urllib3 Timeout object
234s         :param verify: (optional) Either a boolean, in which case it controls whether
234s             we verify the server's TLS certificate, or a string, in which case it
234s             must be a path to a CA bundle to use
234s         :param cert: (optional) Any user-provided SSL certificate to be trusted.
234s         :param proxies: (optional) The proxies dictionary to apply to the request.
234s         :rtype: requests.Response
234s         """
234s 
234s         try:
234s             conn = self.get_connection(request.url, proxies)
234s         except LocationValueError as e:
234s             raise InvalidURL(e, request=request)
234s 
234s         self.cert_verify(conn, request.url, verify, cert)
234s         url = self.request_url(request, proxies)
234s         self.add_headers(
234s             request,
234s             stream=stream,
234s             timeout=timeout,
234s             verify=verify,
234s             cert=cert,
234s             proxies=proxies,
234s         )
234s 
234s         chunked = not (request.body is None or "Content-Length" in request.headers)
234s 
234s         if isinstance(timeout, tuple):
234s             try:
234s                 connect, read = timeout
234s                 timeout = TimeoutSauce(connect=connect, read=read)
234s             except ValueError:
234s                 raise ValueError(
234s                     f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
234s                     f"or a single float to set both timeouts to the same value."
234s                 )
234s         elif isinstance(timeout, TimeoutSauce):
234s             pass
234s         else:
234s             timeout = TimeoutSauce(connect=timeout, read=timeout)
234s 
234s         try:
234s             resp = conn.urlopen(
234s                 method=request.method,
234s                 url=url,
234s                 body=request.body,
234s                 headers=request.headers,
234s                 redirect=False,
234s                 assert_same_host=False,
234s                 preload_content=False,
234s                 decode_content=False,
234s                 retries=self.max_retries,
234s                 timeout=timeout,
234s                 chunked=chunked,
234s             )
234s 
234s         except (ProtocolError, OSError) as err:
234s             raise ConnectionError(err, request=request)
234s 
234s         except MaxRetryError as e:
234s             if isinstance(e.reason, ConnectTimeoutError):
234s                 # TODO: Remove this in 3.0.0: see #2811
234s                 if not isinstance(e.reason, NewConnectionError):
234s                     raise ConnectTimeout(e, request=request)
234s 
234s             if isinstance(e.reason, ResponseError):
234s                 raise RetryError(e, request=request)
234s 
234s             if isinstance(e.reason, _ProxyError):
234s                 raise ProxyError(e, request=request)
234s 
234s             if isinstance(e.reason, _SSLError):
234s                 # This branch is for urllib3 v1.22 and later.
234s                 raise SSLError(e, request=request)
234s 
234s >           raise ConnectionError(e, request=request)
234s E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
234s 
234s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError
234s 
234s The above exception was the direct cause of the following exception:
234s 
234s cls = 
234s 
234s     @classmethod
234s     def setup_class(cls):
234s         cls.tmp_dir = TemporaryDirectory()
234s         def tmp(*parts):
234s             path = os.path.join(cls.tmp_dir.name, *parts)
234s             try:
234s                 os.makedirs(path)
234s             except OSError as e:
234s                 if e.errno != errno.EEXIST:
234s                     raise
234s             return path
234s 
234s         cls.home_dir = tmp('home')
234s         data_dir = cls.data_dir = tmp('data')
234s         config_dir = cls.config_dir = tmp('config')
234s         runtime_dir = cls.runtime_dir = tmp('runtime')
234s         cls.notebook_dir = tmp('notebooks')
234s         cls.env_patch = patch.dict('os.environ', cls.get_patch_env())
234s         cls.env_patch.start()
234s         # Patch systemwide & user-wide data & config directories, to isolate
234s         # the tests from oddities of the local setup. But leave Python env
234s         # locations alone, so data files for e.g. nbconvert are accessible.
234s         # If this isolation isn't sufficient, you may need to run the tests in
234s         # a virtualenv or conda env.
234s         cls.path_patch = patch.multiple(
234s             jupyter_core.paths,
234s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
234s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
234s         )
234s         cls.path_patch.start()
234s 
234s         config = cls.config or Config()
234s         config.NotebookNotary.db_file = ':memory:'
234s 
234s         cls.token = hexlify(os.urandom(4)).decode('ascii')
234s 
234s         started = Event()
234s         def start_thread():
234s             try:
234s                 bind_args = cls.get_bind_args()
234s                 app = cls.notebook = NotebookApp(
234s                     port_retries=0,
234s                     open_browser=False,
234s                     config_dir=cls.config_dir,
234s                     data_dir=cls.data_dir,
234s                     runtime_dir=cls.runtime_dir,
234s                     notebook_dir=cls.notebook_dir,
234s                     base_url=cls.url_prefix,
234s                     config=config,
234s                     allow_root=True,
234s                     token=cls.token,
234s                     **bind_args
234s                 )
234s                 if "asyncio" in sys.modules:
234s                     app._init_asyncio_patch()
234s                 import asyncio
234s 
234s                 asyncio.set_event_loop(asyncio.new_event_loop())
234s                 # Patch the current loop in order to match production
234s                 # behavior
234s                 import nest_asyncio
234s 
234s                 nest_asyncio.apply()
234s                 # don't register signal handler during tests
234s                 app.init_signal = lambda : None
234s                 # clear log handlers and propagate to root for nose to capture it
234s                 # needs to be redone after initialize, which reconfigures logging
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 app.initialize(argv=cls.get_argv())
234s                 app.log.propagate = True
234s                 app.log.handlers = []
234s                 loop = IOLoop.current()
234s                 loop.add_callback(started.set)
234s                 app.start()
234s             finally:
234s                 # set the event, so failure to start doesn't cause a hang
234s                 started.set()
234s                 app.session_manager.close()
234s         cls.notebook_thread = Thread(target=start_thread)
234s         cls.notebook_thread.daemon = True
234s         cls.notebook_thread.start()
234s         started.wait()
234s >       cls.wait_until_alive()
234s 
234s notebook/tests/launchnotebook.py:198: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s cls = 
234s 
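After `setup_class` spawns the server thread, `wait_until_alive` blocks until the server answers or the time budget runs out. The poll-until-ready pattern it uses can be sketched standalone; the constant values and the injectable `fetch` parameter here are illustrative assumptions, not the harness's exact code:

```python
import time

# Sketch of the launchnotebook.py poll-until-ready loop. MAX_WAITTIME and
# POLL_INTERVAL mirror the harness constants; these values are assumptions.
MAX_WAITTIME = 30
POLL_INTERVAL = 1

def wait_until_alive(fetch, max_waittime=MAX_WAITTIME, poll_interval=POLL_INTERVAL):
    """Call fetch() until it succeeds or the time budget is spent."""
    last_error = None
    for _ in range(int(max_waittime / poll_interval)):
        try:
            fetch()
            return True
        except Exception as e:          # server not up yet; pause and retry
            last_error = e
            time.sleep(poll_interval)
    raise RuntimeError("The notebook server failed to start") from last_error
```

In the log above the loop never succeeds: every poll dies with the connection-refused `ConnectionError`, and the harness converts the last one into the `RuntimeError` shown below.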
234s     @classmethod
234s     def wait_until_alive(cls):
234s         """Wait for the server to be alive"""
234s         url = cls.base_url() + 'api/contents'
234s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
234s             try:
234s                 cls.fetch_url(url)
234s             except ModuleNotFoundError as error:
234s                 # Errors that should be immediately thrown back to caller
234s                 raise error
234s             except Exception as e:
234s                 if not cls.notebook_thread.is_alive():
234s >                   raise RuntimeError("The notebook server failed to start") from e
234s E                   RuntimeError: The notebook server failed to start
234s 
234s notebook/tests/launchnotebook.py:59: RuntimeError
234s =================================== FAILURES ===================================
234s __________________ TestSessionManager.test_bad_delete_session __________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s >                   klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:336: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.services.contents.manager.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s self = 
234s 
234s     def setUp(self):
234s >       self.sm = SessionManager(
234s             kernel_manager=DummyMKM(),
234s             contents_manager=ContentsManager(),
234s         )
234s 
234s notebook/services/sessions/tests/test_sessionmanager.py:45: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:327: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:339: TypeError
234s ___________________ TestSessionManager.test_bad_get_session ____________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
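Two separate problems combine in the failure above. First, `jupyter_server` is not installed, so `import_item` raises `ModuleNotFoundError`; second, the fallback `warn(...)` call at notebook/traittypes.py:339 then crashes itself, because the `warn` helper it resolves to (per the traceback, under traitlets 5.14.3, the trigger of this test run) requires a keyword-only `stacklevel`. A stdlib sketch of both behaviours; the helper signature is modeled from the TypeError message, not copied from traitlets:

```python
import warnings

def import_item(name):
    """Stdlib restatement of import_item: "foo.bar" works like `from foo import bar`."""
    package, _, obj = name.rpartition(".")
    if not package:
        return __import__(name)          # bare module name
    module = __import__(package, fromlist=[obj])
    return getattr(module, obj)          # the imported attribute or submodule

# Modeled on the failing helper: stacklevel is keyword-only, so the
# two-positional-argument call seen in notebook/traittypes.py raises TypeError.
def warn(msg, category, *, stacklevel):
    warnings.warn(msg, category, stacklevel=stacklevel + 1)
```

Under this reading, a fix on the notebook side would be to pass the argument explicitly, e.g. `warn(f"{klass} is not importable. Is it installed?", ImportWarning, stacklevel=2)`.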
234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:336: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.services.contents.manager.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s 234s def setUp(self): 234s > self.sm = SessionManager( 234s kernel_manager=DummyMKM(), 234s contents_manager=ContentsManager(), 234s ) 234s 234s notebook/services/sessions/tests/test_sessionmanager.py:45: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:327: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:339: TypeError 234s __________________ TestSessionManager.test_bad_update_session __________________ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 
234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:336: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.services.contents.manager.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s 234s def setUp(self): 234s > self.sm = SessionManager( 234s kernel_manager=DummyMKM(), 234s contents_manager=ContentsManager(), 234s ) 234s 234s notebook/services/sessions/tests/test_sessionmanager.py:45: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:327: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:339: TypeError 234s ____________________ TestSessionManager.test_delete_session ____________________ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 
234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:336: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.services.contents.manager.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s 234s def setUp(self): 234s > self.sm = SessionManager( 234s kernel_manager=DummyMKM(), 234s contents_manager=ContentsManager(), 234s ) 234s 234s notebook/services/sessions/tests/test_sessionmanager.py:45: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:327: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:339: TypeError 234s _____________________ TestSessionManager.test_get_session ______________________ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 
234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:336: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.services.contents.manager.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s 234s def setUp(self): 234s > self.sm = SessionManager( 234s kernel_manager=DummyMKM(), 234s contents_manager=ContentsManager(), 234s ) 234s 234s notebook/services/sessions/tests/test_sessionmanager.py:45: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:327: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:339: TypeError 234s _______________ TestSessionManager.test_get_session_dead_kernel ________________ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 
234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:336: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.services.contents.manager.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s 234s def setUp(self): 234s > self.sm = SessionManager( 234s kernel_manager=DummyMKM(), 234s contents_manager=ContentsManager(), 234s ) 234s 234s notebook/services/sessions/tests/test_sessionmanager.py:45: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:327: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:339: TypeError 234s ____________________ TestSessionManager.test_list_sessions _____________________ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 
234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:336: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.services.contents.manager.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s 234s def setUp(self): 234s > self.sm = SessionManager( 234s kernel_manager=DummyMKM(), 234s contents_manager=ContentsManager(), 234s ) 234s 234s notebook/services/sessions/tests/test_sessionmanager.py:45: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:327: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:339: TypeError 234s ______________ TestSessionManager.test_list_sessions_dead_kernel _______________ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:336: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.services.contents.manager.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s self = 
234s 
234s     def setUp(self):
234s >       self.sm = SessionManager(
234s             kernel_manager=DummyMKM(),
234s             contents_manager=ContentsManager(),
234s         )
234s 
234s notebook/services/sessions/tests/test_sessionmanager.py:45: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:327: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:339: TypeError
234s ____________________ TestSessionManager.test_update_session ____________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:336: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.services.contents.manager.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s self = 
234s 
234s     def setUp(self):
234s >       self.sm = SessionManager(
234s             kernel_manager=DummyMKM(),
234s             contents_manager=ContentsManager(),
234s         )
234s 
234s notebook/services/sessions/tests/test_sessionmanager.py:45: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:327: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:339: TypeError
234s _______________________________ test_help_output _______________________________
234s 
234s     def test_help_output():
234s         """ipython notebook --help-all works"""
234s >       check_help_all_output('notebook')
234s 
234s notebook/tests/test_notebookapp.py:28: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s pkg = 'notebook', subcommand = None
234s 
234s     def check_help_all_output(pkg: str, subcommand: Sequence[str] | None = None) -> tuple[str, str]:
234s         """test that `python -m PKG --help-all` works"""
234s         cmd = [sys.executable, "-m", pkg]
234s         if subcommand:
234s             cmd.extend(subcommand)
234s         cmd.append("--help-all")
234s         out, err, rc = get_output_error_code(cmd)
234s >       assert rc == 0, err
234s E       AssertionError: Traceback (most recent call last):
234s E         File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes
234s E           klass = self._resolve_string(klass)
234s E                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
234s E         File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
234s E           return import_item(string)
234s E                  ^^^^^^^^^^^^^^^^^^^
234s E         File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
234s E           module = __import__(package, fromlist=[obj])
234s E                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
234s E       ModuleNotFoundError: No module named 'jupyter_server'
234s E 
234s E       During handling of the above exception, another exception occurred:
234s E 
234s E       Traceback (most recent call last):
234s E         File "", line 198, in _run_module_as_main
234s E         File "", line 88, in _run_code
234s E         File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/__main__.py", line 3, in 
234s E           app.launch_new_instance()
234s E         File "/usr/lib/python3/dist-packages/jupyter_core/application.py", line 282, in launch_instance
234s E           super().launch_instance(argv=argv, **kwargs)
234s E         File "/usr/lib/python3/dist-packages/traitlets/config/application.py", line 1073, in launch_instance
234s E           app = cls.instance(**kwargs)
234s E                 ^^^^^^^^^^^^^^^^^^^^^^
234s E         File "/usr/lib/python3/dist-packages/traitlets/config/configurable.py", line 583, in instance
234s E           inst = cls(*args, **kwargs)
234s E                  ^^^^^^^^^^^^^^^^^^^^
234s E         File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
234s E           inst.setup_instance(*args, **kwargs)
234s E         File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
234s E           super(HasTraits, self).setup_instance(*args, **kwargs)
234s E         File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
234s E           init(self)
234s E         File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init
234s E           self._resolve_classes()
234s E         File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes
234s E           warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E       TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s /usr/lib/python3/dist-packages/traitlets/tests/utils.py:38: AssertionError
234s ____________________________ test_server_info_file _____________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:235: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.contents.services.managers.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s     def test_server_info_file():
234s         td = TemporaryDirectory()
234s >       nbapp = NotebookApp(runtime_dir=td.name, log=logging.getLogger())
234s 
234s notebook/tests/test_notebookapp.py:32: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:226: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:238: TypeError
234s _________________________________ test_nb_dir __________________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:235: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.contents.services.managers.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s     def test_nb_dir():
234s         with TemporaryDirectory() as td:
234s >           app = NotebookApp(notebook_dir=td)
234s 
234s notebook/tests/test_notebookapp.py:49: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:226: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:238: TypeError
234s ____________________________ test_no_create_nb_dir _____________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:235: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.contents.services.managers.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s     def test_no_create_nb_dir():
234s         with TemporaryDirectory() as td:
234s             nbdir = os.path.join(td, 'notebooks')
234s >           app = NotebookApp()
234s 
234s notebook/tests/test_notebookapp.py:55: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:226: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:238: TypeError
234s _____________________________ test_missing_nb_dir ______________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:235: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.contents.services.managers.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s     def test_missing_nb_dir():
234s         with TemporaryDirectory() as td:
234s             nbdir = os.path.join(td, 'notebook', 'dir', 'is', 'missing')
234s >           app = NotebookApp()
234s 
234s notebook/tests/test_notebookapp.py:62: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:226: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:238: TypeError
234s _____________________________ test_invalid_nb_dir ______________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:235: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.contents.services.managers.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s     def test_invalid_nb_dir():
234s         with NamedTemporaryFile() as tf:
234s >           app = NotebookApp()
234s 
234s notebook/tests/test_notebookapp.py:68: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:226: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:238: TypeError
234s ____________________________ test_nb_dir_with_slash ____________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:235: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.contents.services.managers.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s     def test_nb_dir_with_slash():
234s         with TemporaryDirectory(suffix="_slash" + os.sep) as td:
234s >           app = NotebookApp(notebook_dir=td)
234s 
234s notebook/tests/test_notebookapp.py:74: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:226: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:238: TypeError
234s _______________________________ test_nb_dir_root _______________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:235: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.contents.services.managers.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s     def test_nb_dir_root():
234s         root = os.path.abspath(os.sep)  # gets the right value on Windows, Posix
234s >       app = NotebookApp(notebook_dir=root)
234s 
234s notebook/tests/test_notebookapp.py:79: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:226: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:238: TypeError
234s _____________________________ test_generate_config _____________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s     self.importable_klasses = []
234s     for klass in self.klasses:
234s         if isinstance(klass, str):
234s             try:
234s >               klass = self._resolve_string(klass)
234s 
234s notebook/traittypes.py:235: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string
234s     return import_item(string)
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s name = 'jupyter_server.contents.services.managers.ContentsManager'
234s 
234s     def import_item(name: str) -> Any:
234s         """Import and return ``bar`` given the string ``foo.bar``.
234s 
234s         Calling ``bar = import_item("foo.bar")`` is the functional equivalent of
234s         executing the code ``from foo import bar``.
234s 
234s         Parameters
234s         ----------
234s         name : string
234s             The fully qualified name of the module/package being imported.
234s 
234s         Returns
234s         -------
234s         mod : module object
234s             The module that was imported.
234s         """
234s         if not isinstance(name, str):
234s             raise TypeError("import_item accepts strings, not '%s'." % type(name))
234s         parts = name.rsplit(".", 1)
234s         if len(parts) == 2:
234s             # called with 'foo.bar....'
234s             package, obj = parts
234s >           module = __import__(package, fromlist=[obj])
234s E           ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError
234s 
234s During handling of the above exception, another exception occurred:
234s 
234s     def test_generate_config():
234s         with TemporaryDirectory() as td:
234s >           app = NotebookApp(config_dir=td)
234s 
234s notebook/tests/test_notebookapp.py:84: 
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__
234s     inst.setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance
234s     super(HasTraits, self).setup_instance(*args, **kwargs)
234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance
234s     init(self)
234s notebook/traittypes.py:226: in instance_init
234s     self._resolve_classes()
234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s         self.importable_klasses = []
234s         for klass in self.klasses:
234s             if isinstance(klass, str):
234s                 try:
234s                     klass = self._resolve_string(klass)
234s                     self.importable_klasses.append(klass)
234s                 except:
234s >                   warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s E                   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s notebook/traittypes.py:238: TypeError
234s ____________________________ test_notebook_password ____________________________
234s 
234s self = 
234s 
234s     def _resolve_classes(self):
234s         # Resolve all string names to actual classes.
234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:235: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.contents.services.managers.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s def test_notebook_password(): 234s password = 'secret' 234s with TemporaryDirectory() as td: 234s with patch.dict('os.environ', { 234s 'JUPYTER_CONFIG_DIR': td, 234s }), patch.object(getpass, 'getpass', return_value=password): 234s app = notebookapp.NotebookPasswordApp(log_level=logging.ERROR) 234s app.initialize([]) 234s app.start() 234s > nb = NotebookApp() 234s 234s notebook/tests/test_notebookapp.py:133: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:226: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:238: TypeError 234s _________________ TestInstallServerExtension.test_merge_config _________________ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:235: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.contents.services.managers.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s 234s def test_merge_config(self): 234s # enabled at sys level 234s mock_sys = self._inject_mock_extension('mockext_sys') 234s # enabled at sys, disabled at user 234s mock_both = self._inject_mock_extension('mockext_both') 234s # enabled at user 234s mock_user = self._inject_mock_extension('mockext_user') 234s # enabled at Python 234s mock_py = self._inject_mock_extension('mockext_py') 234s 234s toggle_serverextension_python('mockext_sys', enabled=True, user=False) 234s toggle_serverextension_python('mockext_user', enabled=True, user=True) 234s toggle_serverextension_python('mockext_both', enabled=True, user=False) 234s toggle_serverextension_python('mockext_both', enabled=False, user=True) 234s 234s > app = NotebookApp(nbserver_extensions={'mockext_py': True}) 234s 234s notebook/tests/test_serverextensions.py:147: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:226: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 
234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:238: TypeError 234s _________________ TestOrderedServerExtension.test_load_ordered _________________ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s > klass = self._resolve_string(klass) 234s 234s notebook/traittypes.py:235: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 234s return import_item(string) 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s name = 'jupyter_server.contents.services.managers.ContentsManager' 234s 234s def import_item(name: str) -> Any: 234s """Import and return ``bar`` given the string ``foo.bar``. 234s 234s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 234s executing the code ``from foo import bar``. 234s 234s Parameters 234s ---------- 234s name : string 234s The fully qualified name of the module/package being imported. 234s 234s Returns 234s ------- 234s mod : module object 234s The module that was imported. 234s """ 234s if not isinstance(name, str): 234s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 234s parts = name.rsplit(".", 1) 234s if len(parts) == 2: 234s # called with 'foo.bar....' 
234s package, obj = parts 234s > module = __import__(package, fromlist=[obj]) 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s 234s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 234s 234s During handling of the above exception, another exception occurred: 234s 234s self = 234s 234s def test_load_ordered(self): 234s > app = NotebookApp() 234s 234s notebook/tests/test_serverextensions.py:189: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 234s inst.setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 234s init(self) 234s notebook/traittypes.py:226: in instance_init 234s self._resolve_classes() 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s self = 234s 234s def _resolve_classes(self): 234s # Resolve all string names to actual classes. 234s self.importable_klasses = [] 234s for klass in self.klasses: 234s if isinstance(klass, str): 234s try: 234s klass = self._resolve_string(klass) 234s self.importable_klasses.append(klass) 234s except: 234s > warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s notebook/traittypes.py:238: TypeError 234s _______________________________ test_help_output _______________________________ 234s 234s def test_help_output(): 234s """jupyter notebook --help-all works""" 234s # FIXME: will be notebook 234s > check_help_all_output('notebook') 234s 234s notebook/tests/test_utils.py:21: 234s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 234s 234s pkg = 'notebook', subcommand = None 234s 234s def check_help_all_output(pkg: str, subcommand: Sequence[str] | None = None) -> tuple[str, str]: 234s """test that `python -m PKG --help-all` works""" 234s cmd = [sys.executable, "-m", pkg] 234s if subcommand: 234s cmd.extend(subcommand) 234s cmd.append("--help-all") 234s out, err, rc = get_output_error_code(cmd) 234s > assert rc == 0, err 234s E AssertionError: Traceback (most recent call last): 234s E File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s E klass = self._resolve_string(klass) 234s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s E return import_item(string) 234s E ^^^^^^^^^^^^^^^^^^^ 234s E File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s E module = __import__(package, fromlist=[obj]) 234s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s E ModuleNotFoundError: No module named 'jupyter_server' 234s E 234s E During handling of the above exception, another exception occurred: 234s E 234s E Traceback (most recent call last): 234s E File "", line 198, in _run_module_as_main 234s E File "", line 88, in _run_code 234s E File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/__main__.py", line 3, in 234s E app.launch_new_instance() 234s E File "/usr/lib/python3/dist-packages/jupyter_core/application.py", line 
282, in launch_instance 234s E super().launch_instance(argv=argv, **kwargs) 234s E File "/usr/lib/python3/dist-packages/traitlets/config/application.py", line 1073, in launch_instance 234s E app = cls.instance(**kwargs) 234s E ^^^^^^^^^^^^^^^^^^^^^^ 234s E File "/usr/lib/python3/dist-packages/traitlets/config/configurable.py", line 583, in instance 234s E inst = cls(*args, **kwargs) 234s E ^^^^^^^^^^^^^^^^^^^^ 234s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 234s E inst.setup_instance(*args, **kwargs) 234s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s E super(HasTraits, self).setup_instance(*args, **kwargs) 234s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s E init(self) 234s E File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s E self._resolve_classes() 234s E File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s E warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s /usr/lib/python3/dist-packages/traitlets/tests/utils.py:38: AssertionError 234s =============================== warnings summary =============================== 234s notebook/nbextensions.py:15 234s /tmp/autopkgtest.E327Mm/build.4bM/src/notebook/nbextensions.py:15: DeprecationWarning: Jupyter is migrating its paths to use standard platformdirs 234s given by the platformdirs library. To remove this warning and 234s see the appropriate new directories, set the environment variable 234s `JUPYTER_PLATFORM_DIRS=1` and then run `jupyter --paths`. 
234s The use of platformdirs will be the default in `jupyter_core` v6 234s from jupyter_core.paths import ( 234s 234s notebook/utils.py:280 234s notebook/utils.py:280 234s /tmp/autopkgtest.E327Mm/build.4bM/src/notebook/utils.py:280: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. 234s return LooseVersion(v) >= LooseVersion(check) 234s 234s notebook/_tz.py:29: 1 warning 234s notebook/services/sessions/tests/test_sessionmanager.py: 9 warnings 234s /tmp/autopkgtest.E327Mm/build.4bM/src/notebook/_tz.py:29: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC). 234s dt = unaware(*args, **kwargs) 234s 234s notebook/tests/test_notebookapp_integration.py:14 234s /tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/test_notebookapp_integration.py:14: PytestUnknownMarkWarning: Unknown pytest.mark.integration_tests - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html 234s pytestmark = pytest.mark.integration_tests 234s 234s notebook/auth/tests/test_login.py::LoginTest::test_next_bad 234s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-1 (start_thread) 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s klass = self._resolve_string(klass) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_import_error 234s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-2 (start_thread) 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s klass = self._resolve_string(klass) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 
1292, in __new__ 234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s notebook/services/api/tests/test_api.py::APITest::test_get_spec 234s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-3 (start_thread) 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s klass = self._resolve_string(klass) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File 
"/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s notebook/services/config/tests/test_config_api.py::APITest::test_create_retrieve_config 234s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-4 (start_thread) 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s klass = self._resolve_string(klass) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", 
line 1292, in __new__ 234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints 234s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-5 (start_thread) 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s klass = self._resolve_string(klass) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File 
"/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints 234s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-6 (start_thread) 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s klass = self._resolve_string(klass) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File 
"/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s notebook/services/contents/tests/test_largefilemanager.py: 42 warnings 234s notebook/services/contents/tests/test_manager.py: 526 warnings 234s /tmp/autopkgtest.E327Mm/build.4bM/src/notebook/_tz.py:29: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC). 
234s     dt = unaware(*args, **kwargs)
234s 
234s notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_connections
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-7 (start_thread)
234s 
234s   Traceback (most recent call last):
234s     File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes
234s       klass = self._resolve_string(klass)
234s               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
234s     File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
234s       return import_item(string)
234s              ^^^^^^^^^^^^^^^^^^^
234s     File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item
234s       module = __import__(package, fromlist=[obj])
234s                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
234s   ModuleNotFoundError: No module named 'jupyter_server'
234s 
234s   During handling of the above exception, another exception occurred:
234s 
234s   Traceback (most recent call last):
234s     File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread
234s       app = cls.notebook = NotebookApp(
234s             ^^^^^^^^^^^^
234s     File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__
234s       inst.setup_instance(*args, **kwargs)
234s     File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance
234s       super(HasTraits, self).setup_instance(*args, **kwargs)
234s     File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance
234s       init(self)
234s     File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init
234s       self._resolve_classes()
234s     File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes
234s       warn(f"{klass} is not importable. Is it installed?", ImportWarning)
234s   TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
234s 
234s   During handling of the above exception, another exception occurred:
234s 
234s   Traceback (most recent call last):
234s     File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
234s       self.run()
234s     File "/usr/lib/python3.12/threading.py", line 1010, in run
234s       self._target(*self._args, **self._kwargs)
234s     File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread
234s       app.session_manager.close()
234s       ^^^
234s   UnboundLocalError: cannot access local variable 'app' where it is not associated with a value
234s 
234s   warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
234s 
234s notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_connections
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-8 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/services/kernels/tests/test_kernels_api.py::KernelFilterTest::test_config
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-9 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/services/kernels/tests/test_kernels_api.py::KernelCullingTest::test_culling
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-10 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernel_resource_file
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-11 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/services/nbconvert/tests/test_nbconvert_api.py::APITest::test_list_formats
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-12 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-13 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-14 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-15 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_config
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-16 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/tests/test_files.py::FilesTest::test_contents_manager
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-17 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/tests/test_gateway.py::TestGateway::test_gateway_class_mappings
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-18 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar
234s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar
234s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar
234s   /tmp/autopkgtest.E327Mm/build.4bM/src/notebook/nbextensions.py:154: DeprecationWarning: Python 3.14 will, by default, filter extracted tar archives and reject files or modify their metadata. Use the filter argument to control this behavior.
234s     archive.extractall(nbext)
234s 
234s notebook/tests/test_notebookapp.py::NotebookAppTests::test_list_running_servers
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-19 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_list_running_sock_servers
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-20 (start_thread)
234s   [identical ModuleNotFoundError -> TypeError -> UnboundLocalError traceback chain as for Thread-7 above]
234s 
234s notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_log_json_enabled
234s   /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-21 (start_thread)
234s 
234s   Traceback (most recent call last):
234s     File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes
234s       klass = self._resolve_string(klass)
234s               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
234s     File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string
234s 
return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s notebook/tests/test_paths.py::RedirectTestCase::test_trailing_slash 234s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-22 (start_thread) 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s klass = self._resolve_string(klass) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 
234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s notebook/tree/tests/test_tree_handler.py::TreeTest::test_redirect 234s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-23 (start_thread) 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 235, in _resolve_classes 234s klass = self._resolve_string(klass) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 234s return import_item(string) 234s ^^^^^^^^^^^^^^^^^^^ 234s File 
"/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 234s module = __import__(package, fromlist=[obj]) 234s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 234s ModuleNotFoundError: No module named 'jupyter_server' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 155, in start_thread 234s app = cls.notebook = NotebookApp( 234s ^^^^^^^^^^^^ 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 234s inst.setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 234s super(HasTraits, self).setup_instance(*args, **kwargs) 234s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 234s init(self) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 226, in instance_init 234s self._resolve_classes() 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/traittypes.py", line 238, in _resolve_classes 234s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 234s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 234s 234s During handling of the above exception, another exception occurred: 234s 234s Traceback (most recent call last): 234s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 234s self.run() 234s File "/usr/lib/python3.12/threading.py", line 1010, in run 234s self._target(*self._args, **self._kwargs) 234s File "/tmp/autopkgtest.E327Mm/build.4bM/src/notebook/tests/launchnotebook.py", line 193, in start_thread 234s app.session_manager.close() 234s ^^^ 234s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 234s 234s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 234s 234s -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 234s =========================== short test summary info ============================ 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_delete_session 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_get_session 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_update_session 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_delete_session 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_get_session 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_get_session_dead_kernel 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_list_sessions 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_list_sessions_dead_kernel 234s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_update_session 234s FAILED 
notebook/tests/test_notebookapp.py::test_help_output - AssertionError:... 234s FAILED notebook/tests/test_notebookapp.py::test_server_info_file - TypeError:... 234s FAILED notebook/tests/test_notebookapp.py::test_nb_dir - TypeError: warn() mi... 234s FAILED notebook/tests/test_notebookapp.py::test_no_create_nb_dir - TypeError:... 234s FAILED notebook/tests/test_notebookapp.py::test_missing_nb_dir - TypeError: w... 234s FAILED notebook/tests/test_notebookapp.py::test_invalid_nb_dir - TypeError: w... 234s FAILED notebook/tests/test_notebookapp.py::test_nb_dir_with_slash - TypeError... 234s FAILED notebook/tests/test_notebookapp.py::test_nb_dir_root - TypeError: warn... 234s FAILED notebook/tests/test_notebookapp.py::test_generate_config - TypeError: ... 234s FAILED notebook/tests/test_notebookapp.py::test_notebook_password - TypeError... 234s FAILED notebook/tests/test_serverextensions.py::TestInstallServerExtension::test_merge_config 234s FAILED notebook/tests/test_serverextensions.py::TestOrderedServerExtension::test_load_ordered 234s FAILED notebook/tests/test_utils.py::test_help_output - AssertionError: Trace... 234s ERROR notebook/auth/tests/test_login.py::LoginTest::test_next_bad - RuntimeEr... 234s ERROR notebook/auth/tests/test_login.py::LoginTest::test_next_ok - RuntimeErr... 234s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_import_error 234s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_invoke 234s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_not_enabled 234s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_missing_bundler_arg 234s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_notebook_not_found 234s ERROR notebook/services/api/tests/test_api.py::APITest::test_get_spec - Runti... 234s ERROR notebook/services/api/tests/test_api.py::APITest::test_get_status - Run... 
234s ERROR notebook/services/api/tests/test_api.py::APITest::test_no_track_activity 234s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_create_retrieve_config 234s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_get_unknown 234s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_modify 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints_separate_root 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_400_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_copy 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_dir_400 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_path 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_put_400 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_put_400_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_create_untitled 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_create_untitled_txt 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_delete_hidden_dir 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_delete_hidden_file 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_file_checkpoints 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_404_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_bad_type 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_binary_file_contents 234s ERROR 
notebook/services/contents/tests/test_contents_api.py::APITest::test_get_contents_no_such_file 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_dir_no_content 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_contents 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_invalid 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_no_content 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_text_file_contents 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_dirs 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_nonexistant_dir 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_notebooks 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir_hidden_400 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir_untitled 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename_400_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename_existing 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_save 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_b64 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_txt 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_txt_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_v2 234s ERROR 
notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints_separate_root 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_config_did_something 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_400_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_copy 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_dir_400 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_path 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_put_400 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_put_400_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_create_untitled 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_create_untitled_txt 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_delete_hidden_dir 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_delete_hidden_file 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_file_checkpoints 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_404_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_bad_type 234s ERROR 
notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_binary_file_contents 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_contents_no_such_file 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_dir_no_content 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_contents 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_invalid 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_no_content 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_text_file_contents 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_dirs 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_nonexistant_dir 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_notebooks 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir_hidden_400 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir_untitled 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename_400_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename_existing 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_save 234s 
ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_b64 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_txt 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_txt_hidden 234s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_v2 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_connections 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_default_kernel 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_kernel_handler 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_main_kernel_handler 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_no_kernels 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_connections 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_default_kernel 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_kernel_handler 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_main_kernel_handler 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_no_kernels 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelFilterTest::test_config 234s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelCullingTest::test_culling 234s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernel_resource_file 234s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernelspec 234s ERROR 
notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernelspec_spaces 234s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_nonexistant_kernelspec 234s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_nonexistant_resource 234s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_list_kernelspecs 234s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_list_kernelspecs_bad 234s ERROR notebook/services/nbconvert/tests/test_nbconvert_api.py::APITest::test_list_formats 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_console_session 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_deprecated 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_file_session 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_with_kernel_id 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_delete 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_kernel_id 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_kernel_name 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_path 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_path_deprecated 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_type 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_console_session 234s ERROR 
notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_deprecated 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_file_session 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_with_kernel_id 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_delete 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_kernel_id 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_kernel_name 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_path 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_path_deprecated 234s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_type 234s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal 234s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal_via_get 234s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal_with_name 234s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_no_terminals 234s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_terminal_handler 234s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_terminal_root_handler 234s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_config 234s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_culling 234s ERROR notebook/tests/test_files.py::FilesTest::test_contents_manager - Runtim... 234s ERROR notebook/tests/test_files.py::FilesTest::test_download - RuntimeError: ... 
234s ERROR notebook/tests/test_files.py::FilesTest::test_hidden_files - RuntimeErr... 234s ERROR notebook/tests/test_files.py::FilesTest::test_old_files_redirect - Runt... 234s ERROR notebook/tests/test_files.py::FilesTest::test_view_html - RuntimeError:... 234s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_class_mappings 234s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_get_kernelspecs 234s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_get_named_kernelspec 234s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_kernel_lifecycle 234s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_options - Run... 234s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_session_lifecycle 234s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_list_running_servers 234s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_log_json_default 234s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_validate_log_json 234s ERROR notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_list_running_sock_servers 234s ERROR notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_run 234s ERROR notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_log_json_enabled 234s ERROR notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_validate_log_json 234s ERROR notebook/tests/test_paths.py::RedirectTestCase::test_trailing_slash - R... 234s ERROR notebook/tree/tests/test_tree_handler.py::TreeTest::test_redirect - Run... 
234s = 22 failed, 123 passed, 20 skipped, 5 deselected, 608 warnings, 160 errors in 39.06s = 235s autopkgtest [10:29:26]: test pytest: -----------------------] 236s pytest FAIL non-zero exit status 1 236s autopkgtest [10:29:27]: test pytest: - - - - - - - - - - results - - - - - - - - - - 236s autopkgtest [10:29:27]: test command1: preparing testbed 425s autopkgtest [10:32:36]: testbed dpkg architecture: ppc64el 425s autopkgtest [10:32:36]: testbed apt version: 2.9.5 425s autopkgtest [10:32:36]: @@@@@@@@@@@@@@@@@@@@ test bed setup 426s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [110 kB] 426s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [7052 B] 426s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [36.1 kB] 426s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [2576 B] 426s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [389 kB] 426s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main ppc64el Packages [42.8 kB] 426s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/restricted ppc64el Packages [1860 B] 426s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/universe ppc64el Packages [312 kB] 427s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse ppc64el Packages [2532 B] 427s Fetched 905 kB in 1s (1097 kB/s) 427s Reading package lists... 429s Reading package lists... 429s Building dependency tree... 429s Reading state information... 429s Calculating upgrade... 429s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 429s Reading package lists... 429s Building dependency tree... 429s Reading state information... 430s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
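Each of the tracebacks repeated above chains the same two failures: `jupyter_server` is not installed, and the fallback `warn(...)` call in notebook/traittypes.py then hits a `warn()` whose signature requires a keyword-only `stacklevel` argument (presumably the traitlets 5.14 helper; the import source is an assumption here). A minimal sketch of that failure mode, using a hypothetical stand-in for the helper:

```python
import warnings

# Hypothetical stand-in for the warn() the traceback resolves to: a wrapper
# with a keyword-only stacklevel, in the style of traitlets 5.14 (assumption).
def warn(msg, category, *, stacklevel):
    warnings.warn(msg, category, stacklevel=stacklevel)

# The call site in notebook/traittypes.py passes only two positional arguments:
try:
    warn("jupyter_server is not importable. Is it installed?", ImportWarning)
except TypeError as exc:
    print(exc)  # warn() missing 1 required keyword-only argument: 'stacklevel'

# Passing stacklevel explicitly (or calling warnings.warn directly, whose
# stacklevel is optional) avoids the TypeError:
warn("jupyter_server is not importable. Is it installed?", ImportWarning,
     stacklevel=2)
```

Note the original `ModuleNotFoundError` is only the trigger; the `TypeError` is what actually escapes, which is why so many otherwise-unrelated tests report `TypeError: warn() mi...` in the summary.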
430s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 430s Get:2 http://ftpmaster.internal/ubuntu oracular InRelease [110 kB] 430s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 430s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 430s Get:5 http://ftpmaster.internal/ubuntu oracular/universe Sources [20.1 MB] 432s Get:6 http://ftpmaster.internal/ubuntu oracular/main Sources [1384 kB] 432s Get:7 http://ftpmaster.internal/ubuntu oracular/main ppc64el Packages [1348 kB] 432s Get:8 http://ftpmaster.internal/ubuntu oracular/universe ppc64el Packages [15.2 MB] 438s Fetched 38.1 MB in 8s (4654 kB/s) 439s Reading package lists... 439s Reading package lists... 439s Building dependency tree... 439s Reading state information... 440s Calculating upgrade... 440s The following packages will be upgraded: 440s libldap-common libldap2 440s 2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 440s Need to get 262 kB of archives. 440s After this operation, 0 B of additional disk space will be used. 440s Get:1 http://ftpmaster.internal/ubuntu oracular/main ppc64el libldap-common all 2.6.7+dfsg-1~exp1ubuntu9 [31.5 kB] 440s Get:2 http://ftpmaster.internal/ubuntu oracular/main ppc64el libldap2 ppc64el 2.6.7+dfsg-1~exp1ubuntu9 [231 kB] 441s Fetched 262 kB in 0s (618 kB/s) 441s (Reading database ... 72676 files and directories currently installed.) 
441s Preparing to unpack .../libldap-common_2.6.7+dfsg-1~exp1ubuntu9_all.deb ... 441s Unpacking libldap-common (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ... 441s Preparing to unpack .../libldap2_2.6.7+dfsg-1~exp1ubuntu9_ppc64el.deb ... 441s Unpacking libldap2:ppc64el (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ... 441s Setting up libldap-common (2.6.7+dfsg-1~exp1ubuntu9) ... 441s Setting up libldap2:ppc64el (2.6.7+dfsg-1~exp1ubuntu9) ... 441s Processing triggers for man-db (2.12.1-2) ... 441s Processing triggers for libc-bin (2.39-0ubuntu9) ... 441s Reading package lists... 441s Building dependency tree... 441s Reading state information... 442s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 445s Reading package lists... 445s Building dependency tree... 445s Reading state information... 446s Starting pkgProblemResolver with broken count: 0 446s Starting 2 pkgProblemResolver with broken count: 0 446s Done 446s The following additional packages will be installed: 446s fonts-font-awesome fonts-glyphicons-halflings fonts-lato fonts-mathjax gdb 446s jupyter-core jupyter-notebook libbabeltrace1 libdebuginfod-common 446s libdebuginfod1t64 libjs-backbone libjs-bootstrap libjs-bootstrap-tour 446s libjs-codemirror libjs-es6-promise libjs-jed libjs-jquery 446s libjs-jquery-typeahead libjs-jquery-ui libjs-marked libjs-mathjax 446s libjs-moment libjs-requirejs libjs-requirejs-text libjs-sphinxdoc 446s libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64 libpgm-5.3-0t64 446s libpython3.12t64 libsodium23 libsource-highlight-common 446s libsource-highlight4t64 libzmq5 node-jed python-notebook-doc 446s python-tinycss2-common python3-argon2 python3-asttokens python3-bleach 446s python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil 446s python3-debugpy python3-decorator python3-defusedxml python3-entrypoints 446s python3-executing python3-fastjsonschema python3-html5lib python3-ipykernel 446s python3-ipython 
python3-ipython-genutils python3-jedi python3-jupyter-client 446s python3-jupyter-core python3-jupyterlab-pygments python3-matplotlib-inline 446s python3-mistune python3-nbclient python3-nbconvert python3-nbformat 446s python3-nest-asyncio python3-notebook python3-packaging 446s python3-pandocfilters python3-parso python3-pexpect python3-platformdirs 446s python3-prometheus-client python3-prompt-toolkit python3-psutil 446s python3-ptyprocess python3-pure-eval python3-py python3-pydevd 446s python3-send2trash python3-soupsieve python3-stack-data python3-terminado 446s python3-tinycss2 python3-tornado python3-traitlets python3-typeshed 446s python3-wcwidth python3-webencodings python3-zmq sphinx-rtd-theme-common 446s Suggested packages: 446s gdb-doc gdbserver libjs-jquery-lazyload libjs-json libjs-jquery-ui-docs 446s fonts-mathjax-extras fonts-stix libjs-mathjax-doc python-argon2-doc 446s python-bleach-doc python-bytecode-doc python-coverage-doc 446s python-fastjsonschema-doc python3-genshi python3-lxml python-ipython-doc 446s python3-pip python-nbconvert-doc texlive-fonts-recommended 446s texlive-plain-generic texlive-xetex python-pexpect-doc subversion 446s python3-pytest pydevd python-terminado-doc python-tinycss2-doc 446s python3-pycurl python-tornado-doc python3-twisted 446s Recommended packages: 446s libc-dbg javascript-common python3-lxml python3-matplotlib pandoc 446s python3-ipywidgets 446s The following NEW packages will be installed: 446s autopkgtest-satdep fonts-font-awesome fonts-glyphicons-halflings fonts-lato 446s fonts-mathjax gdb jupyter-core jupyter-notebook libbabeltrace1 446s libdebuginfod-common libdebuginfod1t64 libjs-backbone libjs-bootstrap 446s libjs-bootstrap-tour libjs-codemirror libjs-es6-promise libjs-jed 446s libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked 446s libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text 446s libjs-sphinxdoc libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64 446s 
libpgm-5.3-0t64 libpython3.12t64 libsodium23 libsource-highlight-common 446s libsource-highlight4t64 libzmq5 node-jed python-notebook-doc 446s python-tinycss2-common python3-argon2 python3-asttokens python3-bleach 446s python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil 446s python3-debugpy python3-decorator python3-defusedxml python3-entrypoints 446s python3-executing python3-fastjsonschema python3-html5lib python3-ipykernel 446s python3-ipython python3-ipython-genutils python3-jedi python3-jupyter-client 446s python3-jupyter-core python3-jupyterlab-pygments python3-matplotlib-inline 446s python3-mistune python3-nbclient python3-nbconvert python3-nbformat 446s python3-nest-asyncio python3-notebook python3-packaging 446s python3-pandocfilters python3-parso python3-pexpect python3-platformdirs 446s python3-prometheus-client python3-prompt-toolkit python3-psutil 446s python3-ptyprocess python3-pure-eval python3-py python3-pydevd 446s python3-send2trash python3-soupsieve python3-stack-data python3-terminado 446s python3-tinycss2 python3-tornado python3-traitlets python3-typeshed 446s python3-wcwidth python3-webencodings python3-zmq sphinx-rtd-theme-common 446s 0 upgraded, 92 newly installed, 0 to remove and 0 not upgraded. 446s Need to get 34.6 MB/34.6 MB of archives. 446s After this operation, 182 MB of additional disk space will be used. 
446s Get:1 /tmp/autopkgtest.E327Mm/2-autopkgtest-satdep.deb autopkgtest-satdep ppc64el 0 [728 B] 446s Get:2 http://ftpmaster.internal/ubuntu oracular/main ppc64el fonts-lato all 2.015-1 [2781 kB] 447s Get:3 http://ftpmaster.internal/ubuntu oracular/main ppc64el libdebuginfod-common all 0.191-1 [14.6 kB] 447s Get:4 http://ftpmaster.internal/ubuntu oracular/main ppc64el fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 447s Get:5 http://ftpmaster.internal/ubuntu oracular/universe ppc64el fonts-glyphicons-halflings all 1.009~3.4.1+dfsg-3 [118 kB] 447s Get:6 http://ftpmaster.internal/ubuntu oracular/main ppc64el fonts-mathjax all 2.7.9+dfsg-1 [2208 kB] 447s Get:7 http://ftpmaster.internal/ubuntu oracular/main ppc64el libbabeltrace1 ppc64el 1.5.11-3build3 [209 kB] 447s Get:8 http://ftpmaster.internal/ubuntu oracular/main ppc64el libdebuginfod1t64 ppc64el 0.191-1 [18.4 kB] 447s Get:9 http://ftpmaster.internal/ubuntu oracular/main ppc64el libpython3.12t64 ppc64el 3.12.4-1 [2542 kB] 447s Get:10 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB] 447s Get:11 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsource-highlight4t64 ppc64el 3.1.9-4.3build1 [288 kB] 447s Get:12 http://ftpmaster.internal/ubuntu oracular/main ppc64el gdb ppc64el 15.0.50.20240403-0ubuntu1 [5088 kB] 448s Get:13 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-platformdirs all 4.2.1-1 [16.3 kB] 448s Get:14 http://ftpmaster.internal/ubuntu oracular-proposed/universe ppc64el python3-traitlets all 5.14.3-1 [71.3 kB] 448s Get:15 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyter-core all 5.3.2-2 [25.5 kB] 448s Get:16 http://ftpmaster.internal/ubuntu oracular/universe ppc64el jupyter-core all 5.3.2-2 [4038 B] 448s Get:17 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 448s Get:18 http://ftpmaster.internal/ubuntu oracular/universe 
ppc64el libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB] 448s Get:19 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-bootstrap all 3.4.1+dfsg-3 [129 kB] 448s Get:20 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 448s Get:21 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB] 448s Get:22 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB] 448s Get:23 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-es6-promise all 4.2.8-12 [14.1 kB] 448s Get:24 http://ftpmaster.internal/ubuntu oracular/universe ppc64el node-jed all 1.1.1-4 [15.2 kB] 448s Get:25 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jed all 1.1.1-4 [2584 B] 448s Get:26 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB] 448s Get:27 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB] 448s Get:28 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB] 448s Get:29 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-mathjax all 2.7.9+dfsg-1 [5665 kB] 448s Get:30 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-moment all 2.29.4+ds-1 [147 kB] 448s Get:31 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-requirejs all 2.3.6+ds+~2.1.37-1 [201 kB] 448s Get:32 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-requirejs-text all 2.0.12-1.1 [9056 B] 448s Get:33 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-text-encoding all 0.7.0-5 [140 kB] 448s Get:34 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-xterm all 5.3.0-2 [476 kB] 448s Get:35 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-ptyprocess all 0.7.0-5 [15.1 kB] 448s Get:36 
http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-tornado ppc64el 6.4.1-1 [298 kB] 448s Get:37 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-terminado all 0.18.1-1 [13.2 kB] 448s Get:38 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-argon2 ppc64el 21.1.0-2build1 [21.7 kB] 448s Get:39 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-comm all 0.2.1-1 [7016 B] 448s Get:40 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-bytecode all 0.15.1-3 [44.7 kB] 448s Get:41 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-coverage ppc64el 7.4.4+dfsg1-0ubuntu2 [149 kB] 448s Get:42 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pydevd ppc64el 2.10.0+ds-10ubuntu1 [655 kB] 448s Get:43 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB] 448s Get:44 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-decorator all 5.1.1-5 [10.1 kB] 448s Get:45 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-parso all 0.8.3-1 [67.2 kB] 448s Get:46 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB] 448s Get:47 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jedi all 0.19.1+ds1-1 [693 kB] 448s Get:48 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-matplotlib-inline all 0.1.6-2 [8784 B] 448s Get:49 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-pexpect all 4.9-2 [48.1 kB] 448s Get:50 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 448s Get:51 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-prompt-toolkit all 3.0.46-1 [256 kB] 448s Get:52 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-asttokens all 2.4.1-1 [20.9 kB] 448s Get:53 http://ftpmaster.internal/ubuntu oracular/universe 
ppc64el python3-executing all 2.0.1-0.1 [23.3 kB] 448s Get:54 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pure-eval all 0.2.2-2 [11.1 kB] 448s Get:55 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-stack-data all 0.6.3-1 [22.0 kB] 448s Get:56 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipython all 8.20.0-1ubuntu1 [561 kB] 448s Get:57 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-dateutil all 2.9.0-2 [80.3 kB] 448s Get:58 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-entrypoints all 0.4-2 [7146 B] 448s Get:59 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nest-asyncio all 1.5.4-1 [6256 B] 448s Get:60 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-py all 1.11.0-2 [72.7 kB] 448s Get:61 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libnorm1t64 ppc64el 1.5.9+dfsg-3.1build1 [194 kB] 448s Get:62 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libpgm-5.3-0t64 ppc64el 5.3.128~dfsg-2.1build1 [185 kB] 448s Get:63 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsodium23 ppc64el 1.0.18-1build3 [150 kB] 448s Get:64 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libzmq5 ppc64el 4.3.5-1build2 [297 kB] 448s Get:65 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-zmq ppc64el 24.0.1-5build1 [316 kB] 448s Get:66 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB] 448s Get:67 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-packaging all 24.0-1 [41.1 kB] 448s Get:68 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-psutil ppc64el 5.9.8-2build2 [197 kB] 448s Get:69 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipykernel all 6.29.3-1ubuntu1 [82.6 kB] 448s Get:70 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipython-genutils all 0.2.0-6 [22.0 kB] 448s Get:71 
http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-webencodings all 0.5.1-5 [11.5 kB] 448s Get:72 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-html5lib all 1.1-6 [88.8 kB] 448s Get:73 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-bleach all 6.1.0-2 [49.6 kB] 448s Get:74 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-soupsieve all 2.5-1 [33.0 kB] 448s Get:75 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-bs4 all 4.12.3-1 [109 kB] 448s Get:76 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-defusedxml all 0.7.1-2 [42.0 kB] 449s Get:77 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyterlab-pygments all 0.2.2-3 [6054 B] 449s Get:78 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-mistune all 3.0.2-1 [32.8 kB] 449s Get:79 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-fastjsonschema all 2.19.1-1 [19.7 kB] 449s Get:80 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbformat all 5.9.1-1 [41.2 kB] 449s Get:81 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbclient all 0.8.0-1 [55.6 kB] 449s Get:82 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pandocfilters all 1.5.1-1 [23.6 kB] 449s Get:83 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python-tinycss2-common all 1.3.0-1 [34.1 kB] 449s Get:84 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-tinycss2 all 1.3.0-1 [19.6 kB] 449s Get:85 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbconvert all 7.16.4-1 [156 kB] 449s Get:86 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB] 449s Get:87 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-send2trash all 1.8.2-1 [15.5 kB] 449s Get:88 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB] 449s 
Get:89 http://ftpmaster.internal/ubuntu oracular/universe ppc64el jupyter-notebook all 6.4.12-2.2ubuntu1 [10.4 kB] 449s Get:90 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-sphinxdoc all 7.2.6-8 [150 kB] 449s Get:91 http://ftpmaster.internal/ubuntu oracular/main ppc64el sphinx-rtd-theme-common all 2.0.0+dfsg-1 [1012 kB] 449s Get:92 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python-notebook-doc all 6.4.12-2.2ubuntu1 [2540 kB] 449s Preconfiguring packages ... 449s Fetched 34.6 MB in 3s (13.1 MB/s) 449s Selecting previously unselected package fonts-lato. 449s (Reading database ... 72676 files and directories currently installed.) 449s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ... 449s Unpacking fonts-lato (2.015-1) ... 450s Selecting previously unselected package libdebuginfod-common. 450s Preparing to unpack .../01-libdebuginfod-common_0.191-1_all.deb ... 450s Unpacking libdebuginfod-common (0.191-1) ... 450s Selecting previously unselected package fonts-font-awesome. 450s Preparing to unpack .../02-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 450s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 450s Selecting previously unselected package fonts-glyphicons-halflings. 450s Preparing to unpack .../03-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ... 450s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ...
450s Selecting previously unselected package fonts-mathjax. 450s Preparing to unpack .../04-fonts-mathjax_2.7.9+dfsg-1_all.deb ... 450s Unpacking fonts-mathjax (2.7.9+dfsg-1) ... 450s Selecting previously unselected package libbabeltrace1:ppc64el. 450s Preparing to unpack .../05-libbabeltrace1_1.5.11-3build3_ppc64el.deb ... 450s Unpacking libbabeltrace1:ppc64el (1.5.11-3build3) ... 450s Selecting previously unselected package libdebuginfod1t64:ppc64el. 450s Preparing to unpack .../06-libdebuginfod1t64_0.191-1_ppc64el.deb ... 450s Unpacking libdebuginfod1t64:ppc64el (0.191-1) ... 450s Selecting previously unselected package libpython3.12t64:ppc64el. 450s Preparing to unpack .../07-libpython3.12t64_3.12.4-1_ppc64el.deb ... 450s Unpacking libpython3.12t64:ppc64el (3.12.4-1) ... 450s Selecting previously unselected package libsource-highlight-common. 450s Preparing to unpack .../08-libsource-highlight-common_3.1.9-4.3build1_all.deb ... 450s Unpacking libsource-highlight-common (3.1.9-4.3build1) ... 450s Selecting previously unselected package libsource-highlight4t64:ppc64el. 450s Preparing to unpack .../09-libsource-highlight4t64_3.1.9-4.3build1_ppc64el.deb ... 450s Unpacking libsource-highlight4t64:ppc64el (3.1.9-4.3build1) ... 450s Selecting previously unselected package gdb. 450s Preparing to unpack .../10-gdb_15.0.50.20240403-0ubuntu1_ppc64el.deb ... 450s Unpacking gdb (15.0.50.20240403-0ubuntu1) ... 450s Selecting previously unselected package python3-platformdirs. 450s Preparing to unpack .../11-python3-platformdirs_4.2.1-1_all.deb ... 450s Unpacking python3-platformdirs (4.2.1-1) ... 450s Selecting previously unselected package python3-traitlets. 450s Preparing to unpack .../12-python3-traitlets_5.14.3-1_all.deb ... 450s Unpacking python3-traitlets (5.14.3-1) ... 450s Selecting previously unselected package python3-jupyter-core. 450s Preparing to unpack .../13-python3-jupyter-core_5.3.2-2_all.deb ... 450s Unpacking python3-jupyter-core (5.3.2-2) ... 
450s Selecting previously unselected package jupyter-core. 450s Preparing to unpack .../14-jupyter-core_5.3.2-2_all.deb ... 450s Unpacking jupyter-core (5.3.2-2) ... 450s Selecting previously unselected package libjs-underscore. 450s Preparing to unpack .../15-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 450s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 450s Selecting previously unselected package libjs-backbone. 450s Preparing to unpack .../16-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ... 450s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 450s Selecting previously unselected package libjs-bootstrap. 450s Preparing to unpack .../17-libjs-bootstrap_3.4.1+dfsg-3_all.deb ... 450s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ... 451s Selecting previously unselected package libjs-jquery. 451s Preparing to unpack .../18-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 451s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 451s Selecting previously unselected package libjs-bootstrap-tour. 451s Preparing to unpack .../19-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ... 451s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ... 451s Selecting previously unselected package libjs-codemirror. 451s Preparing to unpack .../20-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ... 451s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ... 451s Selecting previously unselected package libjs-es6-promise. 451s Preparing to unpack .../21-libjs-es6-promise_4.2.8-12_all.deb ... 451s Unpacking libjs-es6-promise (4.2.8-12) ... 451s Selecting previously unselected package node-jed. 451s Preparing to unpack .../22-node-jed_1.1.1-4_all.deb ... 451s Unpacking node-jed (1.1.1-4) ... 451s Selecting previously unselected package libjs-jed. 451s Preparing to unpack .../23-libjs-jed_1.1.1-4_all.deb ... 451s Unpacking libjs-jed (1.1.1-4) ... 451s Selecting previously unselected package libjs-jquery-typeahead. 451s Preparing to unpack .../24-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ... 
451s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 451s Selecting previously unselected package libjs-jquery-ui. 451s Preparing to unpack .../25-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ... 451s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ... 451s Selecting previously unselected package libjs-marked. 451s Preparing to unpack .../26-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ... 451s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ... 451s Selecting previously unselected package libjs-mathjax. 451s Preparing to unpack .../27-libjs-mathjax_2.7.9+dfsg-1_all.deb ... 451s Unpacking libjs-mathjax (2.7.9+dfsg-1) ... 452s Selecting previously unselected package libjs-moment. 452s Preparing to unpack .../28-libjs-moment_2.29.4+ds-1_all.deb ... 452s Unpacking libjs-moment (2.29.4+ds-1) ... 452s Selecting previously unselected package libjs-requirejs. 452s Preparing to unpack .../29-libjs-requirejs_2.3.6+ds+~2.1.37-1_all.deb ... 452s Unpacking libjs-requirejs (2.3.6+ds+~2.1.37-1) ... 452s Selecting previously unselected package libjs-requirejs-text. 452s Preparing to unpack .../30-libjs-requirejs-text_2.0.12-1.1_all.deb ... 452s Unpacking libjs-requirejs-text (2.0.12-1.1) ... 452s Selecting previously unselected package libjs-text-encoding. 452s Preparing to unpack .../31-libjs-text-encoding_0.7.0-5_all.deb ... 452s Unpacking libjs-text-encoding (0.7.0-5) ... 452s Selecting previously unselected package libjs-xterm. 452s Preparing to unpack .../32-libjs-xterm_5.3.0-2_all.deb ... 452s Unpacking libjs-xterm (5.3.0-2) ... 452s Selecting previously unselected package python3-ptyprocess. 452s Preparing to unpack .../33-python3-ptyprocess_0.7.0-5_all.deb ... 452s Unpacking python3-ptyprocess (0.7.0-5) ... 452s Selecting previously unselected package python3-tornado. 452s Preparing to unpack .../34-python3-tornado_6.4.1-1_ppc64el.deb ... 452s Unpacking python3-tornado (6.4.1-1) ... 452s Selecting previously unselected package python3-terminado. 
452s Preparing to unpack .../35-python3-terminado_0.18.1-1_all.deb ... 452s Unpacking python3-terminado (0.18.1-1) ... 452s Selecting previously unselected package python3-argon2. 452s Preparing to unpack .../36-python3-argon2_21.1.0-2build1_ppc64el.deb ... 452s Unpacking python3-argon2 (21.1.0-2build1) ... 452s Selecting previously unselected package python3-comm. 452s Preparing to unpack .../37-python3-comm_0.2.1-1_all.deb ... 452s Unpacking python3-comm (0.2.1-1) ... 452s Selecting previously unselected package python3-bytecode. 452s Preparing to unpack .../38-python3-bytecode_0.15.1-3_all.deb ... 452s Unpacking python3-bytecode (0.15.1-3) ... 452s Selecting previously unselected package python3-coverage. 452s Preparing to unpack .../39-python3-coverage_7.4.4+dfsg1-0ubuntu2_ppc64el.deb ... 452s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 452s Selecting previously unselected package python3-pydevd. 452s Preparing to unpack .../40-python3-pydevd_2.10.0+ds-10ubuntu1_ppc64el.deb ... 452s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ... 452s Selecting previously unselected package python3-debugpy. 452s Preparing to unpack .../41-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ... 452s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ... 452s Selecting previously unselected package python3-decorator. 452s Preparing to unpack .../42-python3-decorator_5.1.1-5_all.deb ... 452s Unpacking python3-decorator (5.1.1-5) ... 452s Selecting previously unselected package python3-parso. 452s Preparing to unpack .../43-python3-parso_0.8.3-1_all.deb ... 452s Unpacking python3-parso (0.8.3-1) ... 453s Selecting previously unselected package python3-typeshed. 453s Preparing to unpack .../44-python3-typeshed_0.0~git20231111.6764465-3_all.deb ... 453s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ... 453s Selecting previously unselected package python3-jedi. 453s Preparing to unpack .../45-python3-jedi_0.19.1+ds1-1_all.deb ... 453s Unpacking python3-jedi (0.19.1+ds1-1) ... 
454s Selecting previously unselected package python3-matplotlib-inline. 454s Preparing to unpack .../46-python3-matplotlib-inline_0.1.6-2_all.deb ... 454s Unpacking python3-matplotlib-inline (0.1.6-2) ... 454s Selecting previously unselected package python3-pexpect. 454s Preparing to unpack .../47-python3-pexpect_4.9-2_all.deb ... 454s Unpacking python3-pexpect (4.9-2) ... 454s Selecting previously unselected package python3-wcwidth. 454s Preparing to unpack .../48-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 454s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 454s Selecting previously unselected package python3-prompt-toolkit. 454s Preparing to unpack .../49-python3-prompt-toolkit_3.0.46-1_all.deb ... 454s Unpacking python3-prompt-toolkit (3.0.46-1) ... 454s Selecting previously unselected package python3-asttokens. 454s Preparing to unpack .../50-python3-asttokens_2.4.1-1_all.deb ... 454s Unpacking python3-asttokens (2.4.1-1) ... 454s Selecting previously unselected package python3-executing. 454s Preparing to unpack .../51-python3-executing_2.0.1-0.1_all.deb ... 454s Unpacking python3-executing (2.0.1-0.1) ... 454s Selecting previously unselected package python3-pure-eval. 454s Preparing to unpack .../52-python3-pure-eval_0.2.2-2_all.deb ... 454s Unpacking python3-pure-eval (0.2.2-2) ... 454s Selecting previously unselected package python3-stack-data. 454s Preparing to unpack .../53-python3-stack-data_0.6.3-1_all.deb ... 454s Unpacking python3-stack-data (0.6.3-1) ... 454s Selecting previously unselected package python3-ipython. 454s Preparing to unpack .../54-python3-ipython_8.20.0-1ubuntu1_all.deb ... 454s Unpacking python3-ipython (8.20.0-1ubuntu1) ... 454s Selecting previously unselected package python3-dateutil. 454s Preparing to unpack .../55-python3-dateutil_2.9.0-2_all.deb ... 454s Unpacking python3-dateutil (2.9.0-2) ... 454s Selecting previously unselected package python3-entrypoints. 
454s Preparing to unpack .../56-python3-entrypoints_0.4-2_all.deb ... 454s Unpacking python3-entrypoints (0.4-2) ... 454s Selecting previously unselected package python3-nest-asyncio. 454s Preparing to unpack .../57-python3-nest-asyncio_1.5.4-1_all.deb ... 454s Unpacking python3-nest-asyncio (1.5.4-1) ... 454s Selecting previously unselected package python3-py. 454s Preparing to unpack .../58-python3-py_1.11.0-2_all.deb ... 454s Unpacking python3-py (1.11.0-2) ... 454s Selecting previously unselected package libnorm1t64:ppc64el. 454s Preparing to unpack .../59-libnorm1t64_1.5.9+dfsg-3.1build1_ppc64el.deb ... 454s Unpacking libnorm1t64:ppc64el (1.5.9+dfsg-3.1build1) ... 454s Selecting previously unselected package libpgm-5.3-0t64:ppc64el. 454s Preparing to unpack .../60-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_ppc64el.deb ... 454s Unpacking libpgm-5.3-0t64:ppc64el (5.3.128~dfsg-2.1build1) ... 454s Selecting previously unselected package libsodium23:ppc64el. 454s Preparing to unpack .../61-libsodium23_1.0.18-1build3_ppc64el.deb ... 454s Unpacking libsodium23:ppc64el (1.0.18-1build3) ... 454s Selecting previously unselected package libzmq5:ppc64el. 454s Preparing to unpack .../62-libzmq5_4.3.5-1build2_ppc64el.deb ... 454s Unpacking libzmq5:ppc64el (4.3.5-1build2) ... 454s Selecting previously unselected package python3-zmq. 454s Preparing to unpack .../63-python3-zmq_24.0.1-5build1_ppc64el.deb ... 454s Unpacking python3-zmq (24.0.1-5build1) ... 454s Selecting previously unselected package python3-jupyter-client. 454s Preparing to unpack .../64-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ... 454s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ... 454s Selecting previously unselected package python3-packaging. 454s Preparing to unpack .../65-python3-packaging_24.0-1_all.deb ... 454s Unpacking python3-packaging (24.0-1) ... 454s Selecting previously unselected package python3-psutil. 454s Preparing to unpack .../66-python3-psutil_5.9.8-2build2_ppc64el.deb ... 
454s Unpacking python3-psutil (5.9.8-2build2) ... 454s Selecting previously unselected package python3-ipykernel. 454s Preparing to unpack .../67-python3-ipykernel_6.29.3-1ubuntu1_all.deb ... 454s Unpacking python3-ipykernel (6.29.3-1ubuntu1) ... 454s Selecting previously unselected package python3-ipython-genutils. 454s Preparing to unpack .../68-python3-ipython-genutils_0.2.0-6_all.deb ... 454s Unpacking python3-ipython-genutils (0.2.0-6) ... 454s Selecting previously unselected package python3-webencodings. 454s Preparing to unpack .../69-python3-webencodings_0.5.1-5_all.deb ... 454s Unpacking python3-webencodings (0.5.1-5) ... 454s Selecting previously unselected package python3-html5lib. 454s Preparing to unpack .../70-python3-html5lib_1.1-6_all.deb ... 454s Unpacking python3-html5lib (1.1-6) ... 454s Selecting previously unselected package python3-bleach. 454s Preparing to unpack .../71-python3-bleach_6.1.0-2_all.deb ... 454s Unpacking python3-bleach (6.1.0-2) ... 454s Selecting previously unselected package python3-soupsieve. 454s Preparing to unpack .../72-python3-soupsieve_2.5-1_all.deb ... 454s Unpacking python3-soupsieve (2.5-1) ... 454s Selecting previously unselected package python3-bs4. 454s Preparing to unpack .../73-python3-bs4_4.12.3-1_all.deb ... 454s Unpacking python3-bs4 (4.12.3-1) ... 454s Selecting previously unselected package python3-defusedxml. 454s Preparing to unpack .../74-python3-defusedxml_0.7.1-2_all.deb ... 454s Unpacking python3-defusedxml (0.7.1-2) ... 454s Selecting previously unselected package python3-jupyterlab-pygments. 454s Preparing to unpack .../75-python3-jupyterlab-pygments_0.2.2-3_all.deb ... 454s Unpacking python3-jupyterlab-pygments (0.2.2-3) ... 454s Selecting previously unselected package python3-mistune. 454s Preparing to unpack .../76-python3-mistune_3.0.2-1_all.deb ... 454s Unpacking python3-mistune (3.0.2-1) ... 454s Selecting previously unselected package python3-fastjsonschema. 
454s Preparing to unpack .../77-python3-fastjsonschema_2.19.1-1_all.deb ... 454s Unpacking python3-fastjsonschema (2.19.1-1) ... 454s Selecting previously unselected package python3-nbformat. 454s Preparing to unpack .../78-python3-nbformat_5.9.1-1_all.deb ... 454s Unpacking python3-nbformat (5.9.1-1) ... 454s Selecting previously unselected package python3-nbclient. 454s Preparing to unpack .../79-python3-nbclient_0.8.0-1_all.deb ... 454s Unpacking python3-nbclient (0.8.0-1) ... 454s Selecting previously unselected package python3-pandocfilters. 454s Preparing to unpack .../80-python3-pandocfilters_1.5.1-1_all.deb ... 454s Unpacking python3-pandocfilters (1.5.1-1) ... 454s Selecting previously unselected package python-tinycss2-common. 454s Preparing to unpack .../81-python-tinycss2-common_1.3.0-1_all.deb ... 454s Unpacking python-tinycss2-common (1.3.0-1) ... 454s Selecting previously unselected package python3-tinycss2. 454s Preparing to unpack .../82-python3-tinycss2_1.3.0-1_all.deb ... 454s Unpacking python3-tinycss2 (1.3.0-1) ... 454s Selecting previously unselected package python3-nbconvert. 454s Preparing to unpack .../83-python3-nbconvert_7.16.4-1_all.deb ... 454s Unpacking python3-nbconvert (7.16.4-1) ... 454s Selecting previously unselected package python3-prometheus-client. 454s Preparing to unpack .../84-python3-prometheus-client_0.19.0+ds1-1_all.deb ... 454s Unpacking python3-prometheus-client (0.19.0+ds1-1) ... 455s Selecting previously unselected package python3-send2trash. 455s Preparing to unpack .../85-python3-send2trash_1.8.2-1_all.deb ... 455s Unpacking python3-send2trash (1.8.2-1) ... 455s Selecting previously unselected package python3-notebook. 455s Preparing to unpack .../86-python3-notebook_6.4.12-2.2ubuntu1_all.deb ... 455s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ... 455s Selecting previously unselected package jupyter-notebook. 455s Preparing to unpack .../87-jupyter-notebook_6.4.12-2.2ubuntu1_all.deb ... 
455s Unpacking jupyter-notebook (6.4.12-2.2ubuntu1) ... 455s Selecting previously unselected package libjs-sphinxdoc. 455s Preparing to unpack .../88-libjs-sphinxdoc_7.2.6-8_all.deb ... 455s Unpacking libjs-sphinxdoc (7.2.6-8) ... 455s Selecting previously unselected package sphinx-rtd-theme-common. 455s Preparing to unpack .../89-sphinx-rtd-theme-common_2.0.0+dfsg-1_all.deb ... 455s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 455s Selecting previously unselected package python-notebook-doc. 455s Preparing to unpack .../90-python-notebook-doc_6.4.12-2.2ubuntu1_all.deb ... 455s Unpacking python-notebook-doc (6.4.12-2.2ubuntu1) ... 455s Selecting previously unselected package autopkgtest-satdep. 455s Preparing to unpack .../91-2-autopkgtest-satdep.deb ... 455s Unpacking autopkgtest-satdep (0) ... 455s Setting up python3-entrypoints (0.4-2) ... 455s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 455s Setting up python3-tornado (6.4.1-1) ... 456s Setting up libnorm1t64:ppc64el (1.5.9+dfsg-3.1build1) ... 456s Setting up python3-pure-eval (0.2.2-2) ... 456s Setting up python3-send2trash (1.8.2-1) ... 456s Setting up fonts-lato (2.015-1) ... 456s Setting up fonts-mathjax (2.7.9+dfsg-1) ... 456s Setting up libsodium23:ppc64el (1.0.18-1build3) ... 456s Setting up libjs-mathjax (2.7.9+dfsg-1) ... 456s Setting up python3-py (1.11.0-2) ... 456s Setting up libdebuginfod-common (0.191-1) ... 456s Setting up libjs-requirejs-text (2.0.12-1.1) ... 456s Setting up python3-parso (0.8.3-1) ... 456s Setting up python3-defusedxml (0.7.1-2) ... 457s Setting up python3-ipython-genutils (0.2.0-6) ... 457s Setting up python3-asttokens (2.4.1-1) ... 457s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 457s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 457s Setting up libjs-moment (2.29.4+ds-1) ... 457s Setting up python3-pandocfilters (1.5.1-1) ... 457s Setting up libjs-requirejs (2.3.6+ds+~2.1.37-1) ... 
457s Setting up libjs-es6-promise (4.2.8-12) ... 457s Setting up libjs-text-encoding (0.7.0-5) ... 457s Setting up python3-webencodings (0.5.1-5) ... 457s Setting up python3-platformdirs (4.2.1-1) ... 458s Setting up python3-psutil (5.9.8-2build2) ... 458s Setting up libsource-highlight-common (3.1.9-4.3build1) ... 458s Setting up python3-jupyterlab-pygments (0.2.2-3) ... 458s Setting up libpython3.12t64:ppc64el (3.12.4-1) ... 458s Setting up libpgm-5.3-0t64:ppc64el (5.3.128~dfsg-2.1build1) ... 458s Setting up python3-decorator (5.1.1-5) ... 458s Setting up python3-packaging (24.0-1) ... 458s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 459s Setting up node-jed (1.1.1-4) ... 459s Setting up python3-typeshed (0.0~git20231111.6764465-3) ... 459s Setting up python3-executing (2.0.1-0.1) ... 459s Setting up libjs-xterm (5.3.0-2) ... 459s Setting up python3-nest-asyncio (1.5.4-1) ... 459s Setting up python3-bytecode (0.15.1-3) ... 459s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ... 459s Setting up libjs-jed (1.1.1-4) ... 459s Setting up python3-html5lib (1.1-6) ... 459s Setting up libbabeltrace1:ppc64el (1.5.11-3build3) ... 459s Setting up python3-fastjsonschema (2.19.1-1) ... 459s Setting up python3-traitlets (5.14.3-1) ... 460s Setting up python-tinycss2-common (1.3.0-1) ... 460s Setting up python3-argon2 (21.1.0-2build1) ... 460s Setting up python3-dateutil (2.9.0-2) ... 460s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 460s Setting up python3-mistune (3.0.2-1) ... 460s Setting up python3-stack-data (0.6.3-1) ... 460s Setting up python3-soupsieve (2.5-1) ... 460s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 460s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 460s Setting up python3-jupyter-core (5.3.2-2) ... 461s Setting up libjs-bootstrap (3.4.1+dfsg-3) ... 461s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 461s Setting up python3-ptyprocess (0.7.0-5) ... 461s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ... 
461s Setting up python3-prompt-toolkit (3.0.46-1) ... 461s Setting up libdebuginfod1t64:ppc64el (0.191-1) ... 461s Setting up python3-tinycss2 (1.3.0-1) ... 461s Setting up libzmq5:ppc64el (4.3.5-1build2) ... 461s Setting up python3-jedi (0.19.1+ds1-1) ... 462s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ... 462s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 462s Setting up libsource-highlight4t64:ppc64el (3.1.9-4.3build1) ... 462s Setting up python3-nbformat (5.9.1-1) ... 462s Setting up python3-bs4 (4.12.3-1) ... 462s Setting up python3-bleach (6.1.0-2) ... 462s Setting up python3-matplotlib-inline (0.1.6-2) ... 462s Setting up python3-comm (0.2.1-1) ... 462s Setting up python3-prometheus-client (0.19.0+ds1-1) ... 463s Setting up gdb (15.0.50.20240403-0ubuntu1) ... 463s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ... 463s Setting up python3-pexpect (4.9-2) ... 463s Setting up python3-zmq (24.0.1-5build1) ... 463s Setting up libjs-sphinxdoc (7.2.6-8) ... 463s Setting up python3-terminado (0.18.1-1) ... 463s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ... 464s Setting up jupyter-core (5.3.2-2) ... 464s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ... 464s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ... 464s Setting up python-notebook-doc (6.4.12-2.2ubuntu1) ... 464s Setting up python3-nbclient (0.8.0-1) ... 465s Setting up python3-ipython (8.20.0-1ubuntu1) ... 465s Setting up python3-ipykernel (6.29.3-1ubuntu1) ... 465s Setting up python3-nbconvert (7.16.4-1) ... 466s Setting up python3-notebook (6.4.12-2.2ubuntu1) ... 466s Setting up jupyter-notebook (6.4.12-2.2ubuntu1) ... 466s Setting up autopkgtest-satdep (0) ... 466s Processing triggers for man-db (2.12.1-2) ... 467s Processing triggers for libc-bin (2.39-0ubuntu9) ... 471s (Reading database ... 89138 files and directories currently installed.) 471s Removing autopkgtest-satdep (0) ... 
473s autopkgtest [10:33:24]: test command1: find /usr/lib/python3/dist-packages/notebook -xtype l >&2 473s autopkgtest [10:33:24]: test command1: [----------------------- 473s autopkgtest [10:33:24]: test command1: -----------------------] 474s command1 PASS (superficial) 474s autopkgtest [10:33:25]: test command1: - - - - - - - - - - results - - - - - - - - - - 474s autopkgtest [10:33:25]: test autodep8-python3: preparing testbed 618s autopkgtest [10:35:49]: testbed dpkg architecture: ppc64el 618s autopkgtest [10:35:49]: testbed apt version: 2.9.5 618s autopkgtest [10:35:49]: @@@@@@@@@@@@@@@@@@@@ test bed setup 619s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [110 kB] 619s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [36.1 kB] 619s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [389 kB] 619s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [7052 B] 619s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [2576 B] 619s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main ppc64el Packages [42.8 kB] 619s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/restricted ppc64el Packages [1860 B] 619s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/universe ppc64el Packages [312 kB] 619s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse ppc64el Packages [2532 B] 619s Fetched 905 kB in 1s (1061 kB/s) 619s Reading package lists... 622s Reading package lists... 622s Building dependency tree... 622s Reading state information... 622s Calculating upgrade... 622s The following packages will be upgraded: 622s libldap-common libldap2 622s 2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 622s Need to get 262 kB of archives. 622s After this operation, 0 B of additional disk space will be used. 
622s Get:1 http://ftpmaster.internal/ubuntu oracular/main ppc64el libldap-common all 2.6.7+dfsg-1~exp1ubuntu9 [31.5 kB] 623s Get:2 http://ftpmaster.internal/ubuntu oracular/main ppc64el libldap2 ppc64el 2.6.7+dfsg-1~exp1ubuntu9 [231 kB] 623s Fetched 262 kB in 0s (606 kB/s) 623s (Reading database ... 72676 files and directories currently installed.) 623s Preparing to unpack .../libldap-common_2.6.7+dfsg-1~exp1ubuntu9_all.deb ... 623s Unpacking libldap-common (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ... 623s Preparing to unpack .../libldap2_2.6.7+dfsg-1~exp1ubuntu9_ppc64el.deb ... 623s Unpacking libldap2:ppc64el (2.6.7+dfsg-1~exp1ubuntu9) over (2.6.7+dfsg-1~exp1ubuntu8) ... 623s Setting up libldap-common (2.6.7+dfsg-1~exp1ubuntu9) ... 623s Setting up libldap2:ppc64el (2.6.7+dfsg-1~exp1ubuntu9) ... 623s Processing triggers for man-db (2.12.1-2) ... 623s Processing triggers for libc-bin (2.39-0ubuntu9) ... 624s Reading package lists... 624s Building dependency tree... 624s Reading state information... 624s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 624s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 625s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 625s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 625s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 626s Reading package lists... 626s Reading package lists... 
626s Building dependency tree... 626s Reading state information... 626s Calculating upgrade... 627s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 627s Reading package lists... 627s Building dependency tree... 627s Reading state information... 627s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 631s Reading package lists... 631s Building dependency tree... 631s Reading state information... 631s Starting pkgProblemResolver with broken count: 0 631s Starting 2 pkgProblemResolver with broken count: 0 631s Done 632s The following additional packages will be installed: 632s fonts-font-awesome fonts-glyphicons-halflings fonts-mathjax gdb 632s libbabeltrace1 libdebuginfod-common libdebuginfod1t64 libjs-backbone 632s libjs-bootstrap libjs-bootstrap-tour libjs-codemirror libjs-es6-promise 632s libjs-jed libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked 632s libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text 632s libjs-text-encoding libjs-underscore libjs-xterm libnorm1t64 libpgm-5.3-0t64 632s libpython3.12t64 libsodium23 libsource-highlight-common 632s libsource-highlight4t64 libzmq5 node-jed python-tinycss2-common python3-all 632s python3-argon2 python3-asttokens python3-bleach python3-bs4 python3-bytecode 632s python3-comm python3-coverage python3-dateutil python3-debugpy 632s python3-decorator python3-defusedxml python3-entrypoints python3-executing 632s python3-fastjsonschema python3-html5lib python3-ipykernel python3-ipython 632s python3-ipython-genutils python3-jedi python3-jupyter-client 632s python3-jupyter-core python3-jupyterlab-pygments python3-matplotlib-inline 632s python3-mistune python3-nbclient python3-nbconvert python3-nbformat 632s python3-nest-asyncio python3-notebook python3-packaging 632s python3-pandocfilters python3-parso python3-pexpect python3-platformdirs 632s python3-prometheus-client python3-prompt-toolkit python3-psutil 632s python3-ptyprocess python3-pure-eval python3-py python3-pydevd 
632s python3-send2trash python3-soupsieve python3-stack-data python3-terminado 632s python3-tinycss2 python3-tornado python3-traitlets python3-typeshed 632s python3-wcwidth python3-webencodings python3-zmq 632s Suggested packages: 632s gdb-doc gdbserver libjs-jquery-lazyload libjs-json libjs-jquery-ui-docs 632s fonts-mathjax-extras fonts-stix libjs-mathjax-doc python-argon2-doc 632s python-bleach-doc python-bytecode-doc python-coverage-doc 632s python-fastjsonschema-doc python3-genshi python3-lxml python-ipython-doc 632s python3-pip python-nbconvert-doc texlive-fonts-recommended 632s texlive-plain-generic texlive-xetex python-notebook-doc python-pexpect-doc 632s subversion python3-pytest pydevd python-terminado-doc python-tinycss2-doc 632s python3-pycurl python-tornado-doc python3-twisted 632s Recommended packages: 632s libc-dbg javascript-common python3-lxml python3-matplotlib pandoc 632s python3-ipywidgets 632s The following NEW packages will be installed: 632s autopkgtest-satdep fonts-font-awesome fonts-glyphicons-halflings 632s fonts-mathjax gdb libbabeltrace1 libdebuginfod-common libdebuginfod1t64 632s libjs-backbone libjs-bootstrap libjs-bootstrap-tour libjs-codemirror 632s libjs-es6-promise libjs-jed libjs-jquery libjs-jquery-typeahead 632s libjs-jquery-ui libjs-marked libjs-mathjax libjs-moment libjs-requirejs 632s libjs-requirejs-text libjs-text-encoding libjs-underscore libjs-xterm 632s libnorm1t64 libpgm-5.3-0t64 libpython3.12t64 libsodium23 632s libsource-highlight-common libsource-highlight4t64 libzmq5 node-jed 632s python-tinycss2-common python3-all python3-argon2 python3-asttokens 632s python3-bleach python3-bs4 python3-bytecode python3-comm python3-coverage 632s python3-dateutil python3-debugpy python3-decorator python3-defusedxml 632s python3-entrypoints python3-executing python3-fastjsonschema 632s python3-html5lib python3-ipykernel python3-ipython python3-ipython-genutils 632s python3-jedi python3-jupyter-client python3-jupyter-core 632s 
python3-jupyterlab-pygments python3-matplotlib-inline python3-mistune 632s python3-nbclient python3-nbconvert python3-nbformat python3-nest-asyncio 632s python3-notebook python3-packaging python3-pandocfilters python3-parso 632s python3-pexpect python3-platformdirs python3-prometheus-client 632s python3-prompt-toolkit python3-psutil python3-ptyprocess python3-pure-eval 632s python3-py python3-pydevd python3-send2trash python3-soupsieve 632s python3-stack-data python3-terminado python3-tinycss2 python3-tornado 632s python3-traitlets python3-typeshed python3-wcwidth python3-webencodings 632s python3-zmq 632s 0 upgraded, 87 newly installed, 0 to remove and 0 not upgraded. 632s Need to get 28.1 MB/28.1 MB of archives. 632s After this operation, 163 MB of additional disk space will be used. 632s Get:1 /tmp/autopkgtest.E327Mm/3-autopkgtest-satdep.deb autopkgtest-satdep ppc64el 0 [716 B] 632s Get:2 http://ftpmaster.internal/ubuntu oracular/main ppc64el libdebuginfod-common all 0.191-1 [14.6 kB] 632s Get:3 http://ftpmaster.internal/ubuntu oracular/main ppc64el fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 632s Get:4 http://ftpmaster.internal/ubuntu oracular/universe ppc64el fonts-glyphicons-halflings all 1.009~3.4.1+dfsg-3 [118 kB] 632s Get:5 http://ftpmaster.internal/ubuntu oracular/main ppc64el fonts-mathjax all 2.7.9+dfsg-1 [2208 kB] 633s Get:6 http://ftpmaster.internal/ubuntu oracular/main ppc64el libbabeltrace1 ppc64el 1.5.11-3build3 [209 kB] 633s Get:7 http://ftpmaster.internal/ubuntu oracular/main ppc64el libdebuginfod1t64 ppc64el 0.191-1 [18.4 kB] 633s Get:8 http://ftpmaster.internal/ubuntu oracular/main ppc64el libpython3.12t64 ppc64el 3.12.4-1 [2542 kB] 633s Get:9 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB] 633s Get:10 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsource-highlight4t64 ppc64el 3.1.9-4.3build1 [288 kB] 633s Get:11 http://ftpmaster.internal/ubuntu 
oracular/main ppc64el gdb ppc64el 15.0.50.20240403-0ubuntu1 [5088 kB] 633s Get:12 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 633s Get:13 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB] 633s Get:14 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-bootstrap all 3.4.1+dfsg-3 [129 kB] 633s Get:15 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 633s Get:16 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB] 633s Get:17 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-es6-promise all 4.2.8-12 [14.1 kB] 633s Get:18 http://ftpmaster.internal/ubuntu oracular/universe ppc64el node-jed all 1.1.1-4 [15.2 kB] 633s Get:19 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jed all 1.1.1-4 [2584 B] 633s Get:20 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB] 633s Get:21 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB] 633s Get:22 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-moment all 2.29.4+ds-1 [147 kB] 633s Get:23 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-text-encoding all 0.7.0-5 [140 kB] 633s Get:24 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-xterm all 5.3.0-2 [476 kB] 633s Get:25 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libnorm1t64 ppc64el 1.5.9+dfsg-3.1build1 [194 kB] 633s Get:26 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libpgm-5.3-0t64 ppc64el 5.3.128~dfsg-2.1build1 [185 kB] 633s Get:27 http://ftpmaster.internal/ubuntu oracular/main ppc64el libsodium23 ppc64el 1.0.18-1build3 [150 kB] 633s Get:28 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libzmq5 ppc64el 4.3.5-1build2 [297 
kB] 633s Get:29 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python-tinycss2-common all 1.3.0-1 [34.1 kB] 633s Get:30 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-all ppc64el 3.12.3-0ubuntu1 [888 B] 633s Get:31 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-argon2 ppc64el 21.1.0-2build1 [21.7 kB] 633s Get:32 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-asttokens all 2.4.1-1 [20.9 kB] 633s Get:33 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-webencodings all 0.5.1-5 [11.5 kB] 633s Get:34 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-html5lib all 1.1-6 [88.8 kB] 633s Get:35 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-bleach all 6.1.0-2 [49.6 kB] 633s Get:36 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-soupsieve all 2.5-1 [33.0 kB] 633s Get:37 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-bs4 all 4.12.3-1 [109 kB] 633s Get:38 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-bytecode all 0.15.1-3 [44.7 kB] 633s Get:39 http://ftpmaster.internal/ubuntu oracular-proposed/universe ppc64el python3-traitlets all 5.14.3-1 [71.3 kB] 633s Get:40 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-comm all 0.2.1-1 [7016 B] 633s Get:41 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-coverage ppc64el 7.4.4+dfsg1-0ubuntu2 [149 kB] 633s Get:42 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-dateutil all 2.9.0-2 [80.3 kB] 633s Get:43 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pydevd ppc64el 2.10.0+ds-10ubuntu1 [655 kB] 633s Get:44 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB] 633s Get:45 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-decorator all 5.1.1-5 [10.1 kB] 633s Get:46 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-defusedxml all 
0.7.1-2 [42.0 kB] 633s Get:47 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-entrypoints all 0.4-2 [7146 B] 633s Get:48 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-executing all 2.0.1-0.1 [23.3 kB] 633s Get:49 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-fastjsonschema all 2.19.1-1 [19.7 kB] 633s Get:50 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-parso all 0.8.3-1 [67.2 kB] 633s Get:51 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB] 633s Get:52 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jedi all 0.19.1+ds1-1 [693 kB] 633s Get:53 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-matplotlib-inline all 0.1.6-2 [8784 B] 633s Get:54 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-ptyprocess all 0.7.0-5 [15.1 kB] 633s Get:55 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-pexpect all 4.9-2 [48.1 kB] 633s Get:56 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 633s Get:57 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-prompt-toolkit all 3.0.46-1 [256 kB] 633s Get:58 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pure-eval all 0.2.2-2 [11.1 kB] 633s Get:59 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-stack-data all 0.6.3-1 [22.0 kB] 633s Get:60 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipython all 8.20.0-1ubuntu1 [561 kB] 633s Get:61 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-platformdirs all 4.2.1-1 [16.3 kB] 633s Get:62 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyter-core all 5.3.2-2 [25.5 kB] 633s Get:63 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nest-asyncio all 1.5.4-1 [6256 B] 633s Get:64 http://ftpmaster.internal/ubuntu 
oracular/main ppc64el python3-tornado ppc64el 6.4.1-1 [298 kB] 633s Get:65 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-py all 1.11.0-2 [72.7 kB] 633s Get:66 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-zmq ppc64el 24.0.1-5build1 [316 kB] 633s Get:67 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB] 633s Get:68 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-packaging all 24.0-1 [41.1 kB] 633s Get:69 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-psutil ppc64el 5.9.8-2build2 [197 kB] 633s Get:70 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipykernel all 6.29.3-1ubuntu1 [82.6 kB] 633s Get:71 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-ipython-genutils all 0.2.0-6 [22.0 kB] 633s Get:72 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-jupyterlab-pygments all 0.2.2-3 [6054 B] 633s Get:73 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-mistune all 3.0.2-1 [32.8 kB] 633s Get:74 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbformat all 5.9.1-1 [41.2 kB] 633s Get:75 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbclient all 0.8.0-1 [55.6 kB] 633s Get:76 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-pandocfilters all 1.5.1-1 [23.6 kB] 633s Get:77 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-tinycss2 all 1.3.0-1 [19.6 kB] 634s Get:78 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-nbconvert all 7.16.4-1 [156 kB] 634s Get:79 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB] 634s Get:80 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB] 634s Get:81 http://ftpmaster.internal/ubuntu oracular/main ppc64el libjs-mathjax all 2.7.9+dfsg-1 [5665 kB] 634s 
Get:82 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-requirejs all 2.3.6+ds+~2.1.37-1 [201 kB] 634s Get:83 http://ftpmaster.internal/ubuntu oracular/universe ppc64el libjs-requirejs-text all 2.0.12-1.1 [9056 B] 634s Get:84 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-terminado all 0.18.1-1 [13.2 kB] 634s Get:85 http://ftpmaster.internal/ubuntu oracular/main ppc64el python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB] 634s Get:86 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-send2trash all 1.8.2-1 [15.5 kB] 634s Get:87 http://ftpmaster.internal/ubuntu oracular/universe ppc64el python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB] 634s Preconfiguring packages ... 634s Fetched 28.1 MB in 2s (14.1 MB/s) 634s Selecting previously unselected package libdebuginfod-common. 634s (Reading database ... 72676 files and directories currently installed.) 634s Preparing to unpack .../00-libdebuginfod-common_0.191-1_all.deb ... 634s Unpacking libdebuginfod-common (0.191-1) ... 634s Selecting previously unselected package fonts-font-awesome. 634s Preparing to unpack .../01-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 634s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 634s Selecting previously unselected package fonts-glyphicons-halflings. 634s Preparing to unpack .../02-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ... 
634s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 634s Selecting previously unselected package fonts-mathjax. 634s Preparing to unpack .../03-fonts-mathjax_2.7.9+dfsg-1_all.deb ... 634s Unpacking fonts-mathjax (2.7.9+dfsg-1) ... 635s Selecting previously unselected package libbabeltrace1:ppc64el. 635s Preparing to unpack .../04-libbabeltrace1_1.5.11-3build3_ppc64el.deb ... 635s Unpacking libbabeltrace1:ppc64el (1.5.11-3build3) ... 635s Selecting previously unselected package libdebuginfod1t64:ppc64el. 635s Preparing to unpack .../05-libdebuginfod1t64_0.191-1_ppc64el.deb ... 635s Unpacking libdebuginfod1t64:ppc64el (0.191-1) ... 635s Selecting previously unselected package libpython3.12t64:ppc64el. 635s Preparing to unpack .../06-libpython3.12t64_3.12.4-1_ppc64el.deb ... 635s Unpacking libpython3.12t64:ppc64el (3.12.4-1) ... 635s Selecting previously unselected package libsource-highlight-common. 635s Preparing to unpack .../07-libsource-highlight-common_3.1.9-4.3build1_all.deb ... 635s Unpacking libsource-highlight-common (3.1.9-4.3build1) ... 635s Selecting previously unselected package libsource-highlight4t64:ppc64el. 635s Preparing to unpack .../08-libsource-highlight4t64_3.1.9-4.3build1_ppc64el.deb ... 635s Unpacking libsource-highlight4t64:ppc64el (3.1.9-4.3build1) ... 635s Selecting previously unselected package gdb. 635s Preparing to unpack .../09-gdb_15.0.50.20240403-0ubuntu1_ppc64el.deb ... 635s Unpacking gdb (15.0.50.20240403-0ubuntu1) ... 635s Selecting previously unselected package libjs-underscore. 635s Preparing to unpack .../10-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 635s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 635s Selecting previously unselected package libjs-backbone. 635s Preparing to unpack .../11-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ... 635s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 635s Selecting previously unselected package libjs-bootstrap. 
635s Preparing to unpack .../12-libjs-bootstrap_3.4.1+dfsg-3_all.deb ... 635s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ... 635s Selecting previously unselected package libjs-jquery. 635s Preparing to unpack .../13-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 635s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 635s Selecting previously unselected package libjs-bootstrap-tour. 635s Preparing to unpack .../14-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ... 635s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ... 635s Selecting previously unselected package libjs-es6-promise. 635s Preparing to unpack .../15-libjs-es6-promise_4.2.8-12_all.deb ... 635s Unpacking libjs-es6-promise (4.2.8-12) ... 635s Selecting previously unselected package node-jed. 635s Preparing to unpack .../16-node-jed_1.1.1-4_all.deb ... 635s Unpacking node-jed (1.1.1-4) ... 635s Selecting previously unselected package libjs-jed. 635s Preparing to unpack .../17-libjs-jed_1.1.1-4_all.deb ... 635s Unpacking libjs-jed (1.1.1-4) ... 635s Selecting previously unselected package libjs-jquery-typeahead. 635s Preparing to unpack .../18-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ... 635s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 635s Selecting previously unselected package libjs-jquery-ui. 635s Preparing to unpack .../19-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ... 635s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ... 635s Selecting previously unselected package libjs-moment. 635s Preparing to unpack .../20-libjs-moment_2.29.4+ds-1_all.deb ... 635s Unpacking libjs-moment (2.29.4+ds-1) ... 635s Selecting previously unselected package libjs-text-encoding. 635s Preparing to unpack .../21-libjs-text-encoding_0.7.0-5_all.deb ... 635s Unpacking libjs-text-encoding (0.7.0-5) ... 635s Selecting previously unselected package libjs-xterm. 635s Preparing to unpack .../22-libjs-xterm_5.3.0-2_all.deb ... 635s Unpacking libjs-xterm (5.3.0-2) ... 635s Selecting previously unselected package libnorm1t64:ppc64el. 
635s Preparing to unpack .../23-libnorm1t64_1.5.9+dfsg-3.1build1_ppc64el.deb ... 635s Unpacking libnorm1t64:ppc64el (1.5.9+dfsg-3.1build1) ... 635s Selecting previously unselected package libpgm-5.3-0t64:ppc64el. 635s Preparing to unpack .../24-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_ppc64el.deb ... 635s Unpacking libpgm-5.3-0t64:ppc64el (5.3.128~dfsg-2.1build1) ... 635s Selecting previously unselected package libsodium23:ppc64el. 635s Preparing to unpack .../25-libsodium23_1.0.18-1build3_ppc64el.deb ... 635s Unpacking libsodium23:ppc64el (1.0.18-1build3) ... 635s Selecting previously unselected package libzmq5:ppc64el. 635s Preparing to unpack .../26-libzmq5_4.3.5-1build2_ppc64el.deb ... 635s Unpacking libzmq5:ppc64el (4.3.5-1build2) ... 635s Selecting previously unselected package python-tinycss2-common. 635s Preparing to unpack .../27-python-tinycss2-common_1.3.0-1_all.deb ... 635s Unpacking python-tinycss2-common (1.3.0-1) ... 635s Selecting previously unselected package python3-all. 635s Preparing to unpack .../28-python3-all_3.12.3-0ubuntu1_ppc64el.deb ... 635s Unpacking python3-all (3.12.3-0ubuntu1) ... 635s Selecting previously unselected package python3-argon2. 635s Preparing to unpack .../29-python3-argon2_21.1.0-2build1_ppc64el.deb ... 635s Unpacking python3-argon2 (21.1.0-2build1) ... 635s Selecting previously unselected package python3-asttokens. 635s Preparing to unpack .../30-python3-asttokens_2.4.1-1_all.deb ... 635s Unpacking python3-asttokens (2.4.1-1) ... 635s Selecting previously unselected package python3-webencodings. 635s Preparing to unpack .../31-python3-webencodings_0.5.1-5_all.deb ... 635s Unpacking python3-webencodings (0.5.1-5) ... 635s Selecting previously unselected package python3-html5lib. 635s Preparing to unpack .../32-python3-html5lib_1.1-6_all.deb ... 635s Unpacking python3-html5lib (1.1-6) ... 635s Selecting previously unselected package python3-bleach. 635s Preparing to unpack .../33-python3-bleach_6.1.0-2_all.deb ... 
635s Unpacking python3-bleach (6.1.0-2) ... 635s Selecting previously unselected package python3-soupsieve. 635s Preparing to unpack .../34-python3-soupsieve_2.5-1_all.deb ... 635s Unpacking python3-soupsieve (2.5-1) ... 635s Selecting previously unselected package python3-bs4. 635s Preparing to unpack .../35-python3-bs4_4.12.3-1_all.deb ... 635s Unpacking python3-bs4 (4.12.3-1) ... 635s Selecting previously unselected package python3-bytecode. 635s Preparing to unpack .../36-python3-bytecode_0.15.1-3_all.deb ... 635s Unpacking python3-bytecode (0.15.1-3) ... 635s Selecting previously unselected package python3-traitlets. 635s Preparing to unpack .../37-python3-traitlets_5.14.3-1_all.deb ... 635s Unpacking python3-traitlets (5.14.3-1) ... 635s Selecting previously unselected package python3-comm. 635s Preparing to unpack .../38-python3-comm_0.2.1-1_all.deb ... 635s Unpacking python3-comm (0.2.1-1) ... 635s Selecting previously unselected package python3-coverage. 635s Preparing to unpack .../39-python3-coverage_7.4.4+dfsg1-0ubuntu2_ppc64el.deb ... 635s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 635s Selecting previously unselected package python3-dateutil. 635s Preparing to unpack .../40-python3-dateutil_2.9.0-2_all.deb ... 635s Unpacking python3-dateutil (2.9.0-2) ... 635s Selecting previously unselected package python3-pydevd. 636s Preparing to unpack .../41-python3-pydevd_2.10.0+ds-10ubuntu1_ppc64el.deb ... 636s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ... 636s Selecting previously unselected package python3-debugpy. 636s Preparing to unpack .../42-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ... 636s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ... 636s Selecting previously unselected package python3-decorator. 636s Preparing to unpack .../43-python3-decorator_5.1.1-5_all.deb ... 636s Unpacking python3-decorator (5.1.1-5) ... 636s Selecting previously unselected package python3-defusedxml. 
636s Preparing to unpack .../44-python3-defusedxml_0.7.1-2_all.deb ... 636s Unpacking python3-defusedxml (0.7.1-2) ... 636s Selecting previously unselected package python3-entrypoints. 636s Preparing to unpack .../45-python3-entrypoints_0.4-2_all.deb ... 636s Unpacking python3-entrypoints (0.4-2) ... 636s Selecting previously unselected package python3-executing. 636s Preparing to unpack .../46-python3-executing_2.0.1-0.1_all.deb ... 636s Unpacking python3-executing (2.0.1-0.1) ... 636s Selecting previously unselected package python3-fastjsonschema. 636s Preparing to unpack .../47-python3-fastjsonschema_2.19.1-1_all.deb ... 636s Unpacking python3-fastjsonschema (2.19.1-1) ... 636s Selecting previously unselected package python3-parso. 636s Preparing to unpack .../48-python3-parso_0.8.3-1_all.deb ... 636s Unpacking python3-parso (0.8.3-1) ... 636s Selecting previously unselected package python3-typeshed. 636s Preparing to unpack .../49-python3-typeshed_0.0~git20231111.6764465-3_all.deb ... 636s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ... 636s Selecting previously unselected package python3-jedi. 636s Preparing to unpack .../50-python3-jedi_0.19.1+ds1-1_all.deb ... 636s Unpacking python3-jedi (0.19.1+ds1-1) ... 637s Selecting previously unselected package python3-matplotlib-inline. 637s Preparing to unpack .../51-python3-matplotlib-inline_0.1.6-2_all.deb ... 637s Unpacking python3-matplotlib-inline (0.1.6-2) ... 637s Selecting previously unselected package python3-ptyprocess. 637s Preparing to unpack .../52-python3-ptyprocess_0.7.0-5_all.deb ... 637s Unpacking python3-ptyprocess (0.7.0-5) ... 637s Selecting previously unselected package python3-pexpect. 637s Preparing to unpack .../53-python3-pexpect_4.9-2_all.deb ... 637s Unpacking python3-pexpect (4.9-2) ... 637s Selecting previously unselected package python3-wcwidth. 637s Preparing to unpack .../54-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 
637s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ...
637s Selecting previously unselected package python3-prompt-toolkit.
637s Preparing to unpack .../55-python3-prompt-toolkit_3.0.46-1_all.deb ...
637s Unpacking python3-prompt-toolkit (3.0.46-1) ...
637s Selecting previously unselected package python3-pure-eval.
637s Preparing to unpack .../56-python3-pure-eval_0.2.2-2_all.deb ...
637s Unpacking python3-pure-eval (0.2.2-2) ...
637s Selecting previously unselected package python3-stack-data.
637s Preparing to unpack .../57-python3-stack-data_0.6.3-1_all.deb ...
637s Unpacking python3-stack-data (0.6.3-1) ...
637s Selecting previously unselected package python3-ipython.
637s Preparing to unpack .../58-python3-ipython_8.20.0-1ubuntu1_all.deb ...
637s Unpacking python3-ipython (8.20.0-1ubuntu1) ...
637s Selecting previously unselected package python3-platformdirs.
637s Preparing to unpack .../59-python3-platformdirs_4.2.1-1_all.deb ...
637s Unpacking python3-platformdirs (4.2.1-1) ...
637s Selecting previously unselected package python3-jupyter-core.
637s Preparing to unpack .../60-python3-jupyter-core_5.3.2-2_all.deb ...
637s Unpacking python3-jupyter-core (5.3.2-2) ...
637s Selecting previously unselected package python3-nest-asyncio.
637s Preparing to unpack .../61-python3-nest-asyncio_1.5.4-1_all.deb ...
637s Unpacking python3-nest-asyncio (1.5.4-1) ...
637s Selecting previously unselected package python3-tornado.
637s Preparing to unpack .../62-python3-tornado_6.4.1-1_ppc64el.deb ...
637s Unpacking python3-tornado (6.4.1-1) ...
637s Selecting previously unselected package python3-py.
637s Preparing to unpack .../63-python3-py_1.11.0-2_all.deb ...
637s Unpacking python3-py (1.11.0-2) ...
637s Selecting previously unselected package python3-zmq.
637s Preparing to unpack .../64-python3-zmq_24.0.1-5build1_ppc64el.deb ...
637s Unpacking python3-zmq (24.0.1-5build1) ...
637s Selecting previously unselected package python3-jupyter-client.
637s Preparing to unpack .../65-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ...
637s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ...
637s Selecting previously unselected package python3-packaging.
637s Preparing to unpack .../66-python3-packaging_24.0-1_all.deb ...
637s Unpacking python3-packaging (24.0-1) ...
637s Selecting previously unselected package python3-psutil.
637s Preparing to unpack .../67-python3-psutil_5.9.8-2build2_ppc64el.deb ...
637s Unpacking python3-psutil (5.9.8-2build2) ...
637s Selecting previously unselected package python3-ipykernel.
637s Preparing to unpack .../68-python3-ipykernel_6.29.3-1ubuntu1_all.deb ...
637s Unpacking python3-ipykernel (6.29.3-1ubuntu1) ...
637s Selecting previously unselected package python3-ipython-genutils.
637s Preparing to unpack .../69-python3-ipython-genutils_0.2.0-6_all.deb ...
637s Unpacking python3-ipython-genutils (0.2.0-6) ...
637s Selecting previously unselected package python3-jupyterlab-pygments.
637s Preparing to unpack .../70-python3-jupyterlab-pygments_0.2.2-3_all.deb ...
637s Unpacking python3-jupyterlab-pygments (0.2.2-3) ...
637s Selecting previously unselected package python3-mistune.
637s Preparing to unpack .../71-python3-mistune_3.0.2-1_all.deb ...
637s Unpacking python3-mistune (3.0.2-1) ...
637s Selecting previously unselected package python3-nbformat.
637s Preparing to unpack .../72-python3-nbformat_5.9.1-1_all.deb ...
637s Unpacking python3-nbformat (5.9.1-1) ...
637s Selecting previously unselected package python3-nbclient.
637s Preparing to unpack .../73-python3-nbclient_0.8.0-1_all.deb ...
637s Unpacking python3-nbclient (0.8.0-1) ...
637s Selecting previously unselected package python3-pandocfilters.
637s Preparing to unpack .../74-python3-pandocfilters_1.5.1-1_all.deb ...
637s Unpacking python3-pandocfilters (1.5.1-1) ...
637s Selecting previously unselected package python3-tinycss2.
637s Preparing to unpack .../75-python3-tinycss2_1.3.0-1_all.deb ...
637s Unpacking python3-tinycss2 (1.3.0-1) ...
637s Selecting previously unselected package python3-nbconvert.
637s Preparing to unpack .../76-python3-nbconvert_7.16.4-1_all.deb ...
637s Unpacking python3-nbconvert (7.16.4-1) ...
637s Selecting previously unselected package libjs-codemirror.
637s Preparing to unpack .../77-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ...
637s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ...
637s Selecting previously unselected package libjs-marked.
637s Preparing to unpack .../78-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ...
637s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ...
637s Selecting previously unselected package libjs-mathjax.
637s Preparing to unpack .../79-libjs-mathjax_2.7.9+dfsg-1_all.deb ...
637s Unpacking libjs-mathjax (2.7.9+dfsg-1) ...
639s Selecting previously unselected package libjs-requirejs.
639s Preparing to unpack .../80-libjs-requirejs_2.3.6+ds+~2.1.37-1_all.deb ...
639s Unpacking libjs-requirejs (2.3.6+ds+~2.1.37-1) ...
639s Selecting previously unselected package libjs-requirejs-text.
639s Preparing to unpack .../81-libjs-requirejs-text_2.0.12-1.1_all.deb ...
639s Unpacking libjs-requirejs-text (2.0.12-1.1) ...
639s Selecting previously unselected package python3-terminado.
639s Preparing to unpack .../82-python3-terminado_0.18.1-1_all.deb ...
639s Unpacking python3-terminado (0.18.1-1) ...
639s Selecting previously unselected package python3-prometheus-client.
639s Preparing to unpack .../83-python3-prometheus-client_0.19.0+ds1-1_all.deb ...
639s Unpacking python3-prometheus-client (0.19.0+ds1-1) ...
639s Selecting previously unselected package python3-send2trash.
639s Preparing to unpack .../84-python3-send2trash_1.8.2-1_all.deb ...
639s Unpacking python3-send2trash (1.8.2-1) ...
639s Selecting previously unselected package python3-notebook.
639s Preparing to unpack .../85-python3-notebook_6.4.12-2.2ubuntu1_all.deb ...
639s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ...
639s Selecting previously unselected package autopkgtest-satdep.
639s Preparing to unpack .../86-3-autopkgtest-satdep.deb ...
639s Unpacking autopkgtest-satdep (0) ...
639s Setting up python3-entrypoints (0.4-2) ...
639s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ...
639s Setting up python3-tornado (6.4.1-1) ...
640s Setting up libnorm1t64:ppc64el (1.5.9+dfsg-3.1build1) ...
640s Setting up python3-pure-eval (0.2.2-2) ...
640s Setting up python3-send2trash (1.8.2-1) ...
640s Setting up fonts-mathjax (2.7.9+dfsg-1) ...
640s Setting up libsodium23:ppc64el (1.0.18-1build3) ...
640s Setting up libjs-mathjax (2.7.9+dfsg-1) ...
640s Setting up python3-py (1.11.0-2) ...
640s Setting up libdebuginfod-common (0.191-1) ...
640s Setting up libjs-requirejs-text (2.0.12-1.1) ...
640s Setting up python3-parso (0.8.3-1) ...
640s Setting up python3-defusedxml (0.7.1-2) ...
641s Setting up python3-ipython-genutils (0.2.0-6) ...
641s Setting up python3-asttokens (2.4.1-1) ...
641s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ...
641s Setting up python3-all (3.12.3-0ubuntu1) ...
641s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
641s Setting up libjs-moment (2.29.4+ds-1) ...
641s Setting up python3-pandocfilters (1.5.1-1) ...
641s Setting up libjs-requirejs (2.3.6+ds+~2.1.37-1) ...
641s Setting up libjs-es6-promise (4.2.8-12) ...
641s Setting up libjs-text-encoding (0.7.0-5) ...
641s Setting up python3-webencodings (0.5.1-5) ...
641s Setting up python3-platformdirs (4.2.1-1) ...
642s Setting up python3-psutil (5.9.8-2build2) ...
642s Setting up libsource-highlight-common (3.1.9-4.3build1) ...
642s Setting up python3-jupyterlab-pygments (0.2.2-3) ...
642s Setting up libpython3.12t64:ppc64el (3.12.4-1) ...
642s Setting up libpgm-5.3-0t64:ppc64el (5.3.128~dfsg-2.1build1) ...
642s Setting up python3-decorator (5.1.1-5) ...
642s Setting up python3-packaging (24.0-1) ...
642s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ...
643s Setting up node-jed (1.1.1-4) ...
643s Setting up python3-typeshed (0.0~git20231111.6764465-3) ...
643s Setting up python3-executing (2.0.1-0.1) ...
643s Setting up libjs-xterm (5.3.0-2) ...
643s Setting up python3-nest-asyncio (1.5.4-1) ...
643s Setting up python3-bytecode (0.15.1-3) ...
643s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ...
643s Setting up libjs-jed (1.1.1-4) ...
643s Setting up python3-html5lib (1.1-6) ...
643s Setting up libbabeltrace1:ppc64el (1.5.11-3build3) ...
643s Setting up python3-fastjsonschema (2.19.1-1) ...
643s Setting up python3-traitlets (5.14.3-1) ...
644s Setting up python-tinycss2-common (1.3.0-1) ...
644s Setting up python3-argon2 (21.1.0-2build1) ...
644s Setting up python3-dateutil (2.9.0-2) ...
644s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
644s Setting up python3-mistune (3.0.2-1) ...
644s Setting up python3-stack-data (0.6.3-1) ...
644s Setting up python3-soupsieve (2.5-1) ...
644s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
644s Setting up python3-jupyter-core (5.3.2-2) ...
645s Setting up libjs-bootstrap (3.4.1+dfsg-3) ...
645s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
645s Setting up python3-ptyprocess (0.7.0-5) ...
645s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ...
645s Setting up python3-prompt-toolkit (3.0.46-1) ...
645s Setting up libdebuginfod1t64:ppc64el (0.191-1) ...
645s Setting up python3-tinycss2 (1.3.0-1) ...
645s Setting up libzmq5:ppc64el (4.3.5-1build2) ...
645s Setting up python3-jedi (0.19.1+ds1-1) ...
646s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ...
646s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ...
646s Setting up libsource-highlight4t64:ppc64el (3.1.9-4.3build1) ...
646s Setting up python3-nbformat (5.9.1-1) ...
646s Setting up python3-bs4 (4.12.3-1) ...
646s Setting up python3-bleach (6.1.0-2) ...
646s Setting up python3-matplotlib-inline (0.1.6-2) ...
646s Setting up python3-comm (0.2.1-1) ...
646s Setting up python3-prometheus-client (0.19.0+ds1-1) ...
647s Setting up gdb (15.0.50.20240403-0ubuntu1) ...
647s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ...
647s Setting up python3-pexpect (4.9-2) ...
647s Setting up python3-zmq (24.0.1-5build1) ...
647s Setting up python3-terminado (0.18.1-1) ...
647s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ...
647s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ...
648s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ...
648s Setting up python3-nbclient (0.8.0-1) ...
648s Setting up python3-ipython (8.20.0-1ubuntu1) ...
649s Setting up python3-ipykernel (6.29.3-1ubuntu1) ...
649s Setting up python3-nbconvert (7.16.4-1) ...
650s Setting up python3-notebook (6.4.12-2.2ubuntu1) ...
650s Setting up autopkgtest-satdep (0) ...
650s Processing triggers for man-db (2.12.1-2) ...
650s Processing triggers for libc-bin (2.39-0ubuntu9) ...
654s (Reading database ... 88878 files and directories currently installed.)
654s Removing autopkgtest-satdep (0) ...
658s autopkgtest [10:36:29]: test autodep8-python3: set -e ; for py in $(py3versions -r 2>/dev/null) ; do cd "$AUTOPKGTEST_TMP" ; echo "Testing with $py:" ; $py -c "import notebook; print(notebook)" ; done
658s autopkgtest [10:36:29]: test autodep8-python3: [-----------------------
658s Testing with python3.12:
658s
659s autopkgtest [10:36:30]: test autodep8-python3: -----------------------]
659s autodep8-python3 PASS (superficial)
659s autopkgtest [10:36:30]: test autodep8-python3: - - - - - - - - - - results - - - - - - - - - -
659s autopkgtest [10:36:30]: @@@@@@@@@@@@@@@@@@@@ summary
659s pytest FAIL non-zero exit status 1
659s command1 PASS (superficial)
659s autodep8-python3 PASS (superficial)
685s nova [W] Using flock in scalingstack-bos01-ppc64el
685s Creating nova instance adt-oracular-ppc64el-jupyter-notebook-20240616-102531-juju-7f2275-prod-proposed-migration-environment-3-f7666d8f-c4c0-4137-95eb-491025808bae from image adt/ubuntu-oracular-ppc64el-server-20240616.img (UUID 9b457a1c-2888-49ee-9317-eaf7cec2f603)...